00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 353 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3015 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.038 The recommended git tool is: git 00:00:00.038 using credential 00000000-0000-0000-0000-000000000002 00:00:00.039 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.061 Fetching changes from the remote Git repository 00:00:00.063 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.099 Using shallow fetch with depth 1 00:00:00.099 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.099 > git --version # timeout=10 00:00:00.146 > git --version # 'git version 2.39.2' 00:00:00.146 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.147 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.147 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.901 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.914 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.927 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD) 00:00:03.928 > git config core.sparsecheckout # timeout=10 00:00:03.940 > git read-tree -mu HEAD # timeout=10 00:00:03.959 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5 00:00:03.988 Commit message: "ansible/roles/custom_facts: Drop nvme features" 00:00:03.988 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:04.102 [Pipeline] Start of Pipeline 00:00:04.116 [Pipeline] library 00:00:04.118 Loading library shm_lib@master 00:00:04.118 Library shm_lib@master is cached. Copying from home. 00:00:04.136 [Pipeline] node 00:00:04.146 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.147 [Pipeline] { 00:00:04.167 [Pipeline] catchError 00:00:04.169 [Pipeline] { 00:00:04.182 [Pipeline] wrap 00:00:04.193 [Pipeline] { 00:00:04.199 [Pipeline] stage 00:00:04.201 [Pipeline] { (Prologue) 00:00:04.378 [Pipeline] sh 00:00:04.666 + logger -p user.info -t JENKINS-CI 00:00:04.682 [Pipeline] echo 00:00:04.683 Node: CYP12 00:00:04.688 [Pipeline] sh 00:00:04.988 [Pipeline] setCustomBuildProperty 00:00:04.998 [Pipeline] echo 00:00:04.999 Cleanup processes 00:00:05.002 [Pipeline] sh 00:00:05.288 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.288 3580954 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.301 [Pipeline] sh 00:00:05.586 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.586 ++ grep -v 'sudo pgrep' 00:00:05.586 ++ awk '{print $1}' 00:00:05.586 + sudo kill -9 00:00:05.586 + true 00:00:05.598 [Pipeline] cleanWs 00:00:05.607 [WS-CLEANUP] Deleting project workspace... 00:00:05.607 [WS-CLEANUP] Deferred wipeout is used... 
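The cleanup step traced above boils down to a stale-process sweep: list anything still referencing the build workspace, extract the PIDs, and force-kill them before the job proper starts. A minimal sketch of that pattern, with the workspace path taken from this run:

#!/usr/bin/env bash
# Sketch of the stale-process cleanup traced above; the path varies per job.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

# "pgrep -af" prints PID plus full command line for every match; drop the
# pgrep invocation itself, then keep only the PID column.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

# "kill -9" with an empty PID list exits nonzero, hence the trailing
# "|| true" -- the same role the "+ true" plays in the trace above.
sudo kill -9 $pids || true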
00:00:05.614 [WS-CLEANUP] done 00:00:05.617 [Pipeline] setCustomBuildProperty 00:00:05.629 [Pipeline] sh 00:00:05.911 + sudo git config --global --replace-all safe.directory '*' 00:00:06.005 [Pipeline] nodesByLabel 00:00:06.006 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.014 [Pipeline] httpRequest 00:00:06.018 HttpMethod: GET 00:00:06.019 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:06.022 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:06.031 Response Code: HTTP/1.1 200 OK 00:00:06.032 Success: Status code 200 is in the accepted range: 200,404 00:00:06.032 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:06.989 [Pipeline] sh 00:00:07.273 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:07.289 [Pipeline] httpRequest 00:00:07.294 HttpMethod: GET 00:00:07.294 URL: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:07.294 Sending request to url: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:07.308 Response Code: HTTP/1.1 200 OK 00:00:07.309 Success: Status code 200 is in the accepted range: 200,404 00:00:07.309 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:23.318 [Pipeline] sh 00:00:23.606 + tar --no-same-owner -xf spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:26.925 [Pipeline] sh 00:00:27.215 + git -C spdk log --oneline -n5 00:00:27.215 8571999d8 test/scheduler: Stop moving all processes between cgroups 00:00:27.215 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:00:27.215 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD 00:00:27.215 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:00:27.215 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:00:27.239 [Pipeline] withCredentials 00:00:27.250 > git --version # timeout=10 00:00:27.262 > git --version # 'git version 2.39.2' 00:00:27.281 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:27.283 [Pipeline] { 00:00:27.292 [Pipeline] retry 00:00:27.294 [Pipeline] { 00:00:27.315 [Pipeline] sh 00:00:27.604 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:27.618 [Pipeline] } 00:00:27.643 [Pipeline] // retry 00:00:27.648 [Pipeline] } 00:00:27.667 [Pipeline] // withCredentials 00:00:27.678 [Pipeline] httpRequest 00:00:27.683 HttpMethod: GET 00:00:27.683 URL: http://10.211.164.96/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:27.684 Sending request to url: http://10.211.164.96/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:27.687 Response Code: HTTP/1.1 200 OK 00:00:27.688 Success: Status code 200 is in the accepted range: 200,404 00:00:27.688 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:34.494 [Pipeline] sh 00:00:34.787 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:36.717 [Pipeline] sh 00:00:37.005 + git -C dpdk log --oneline -n5 00:00:37.005 eeb0605f11 version: 23.11.0 00:00:37.005 238778122a doc: update release notes for 23.11 00:00:37.005 46aa6b3cfc doc: fix description of RSS features 00:00:37.005 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:37.005 7e421ae345 devtools: support 
skipping forbid rule check 00:00:37.018 [Pipeline] } 00:00:37.036 [Pipeline] // stage 00:00:37.046 [Pipeline] stage 00:00:37.049 [Pipeline] { (Prepare) 00:00:37.069 [Pipeline] writeFile 00:00:37.084 [Pipeline] sh 00:00:37.429 + logger -p user.info -t JENKINS-CI 00:00:37.443 [Pipeline] sh 00:00:37.728 + logger -p user.info -t JENKINS-CI 00:00:37.741 [Pipeline] sh 00:00:38.026 + cat autorun-spdk.conf 00:00:38.026 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.026 SPDK_TEST_NVMF=1 00:00:38.026 SPDK_TEST_NVME_CLI=1 00:00:38.026 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.026 SPDK_TEST_NVMF_NICS=e810 00:00:38.026 SPDK_TEST_VFIOUSER=1 00:00:38.026 SPDK_RUN_UBSAN=1 00:00:38.026 NET_TYPE=phy 00:00:38.026 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:38.026 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:38.034 RUN_NIGHTLY=1 00:00:38.039 [Pipeline] readFile 00:00:38.063 [Pipeline] withEnv 00:00:38.065 [Pipeline] { 00:00:38.080 [Pipeline] sh 00:00:38.368 + set -ex 00:00:38.369 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:38.369 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:38.369 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.369 ++ SPDK_TEST_NVMF=1 00:00:38.369 ++ SPDK_TEST_NVME_CLI=1 00:00:38.369 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.369 ++ SPDK_TEST_NVMF_NICS=e810 00:00:38.369 ++ SPDK_TEST_VFIOUSER=1 00:00:38.369 ++ SPDK_RUN_UBSAN=1 00:00:38.369 ++ NET_TYPE=phy 00:00:38.369 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:38.369 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:38.369 ++ RUN_NIGHTLY=1 00:00:38.369 + case $SPDK_TEST_NVMF_NICS in 00:00:38.369 + DRIVERS=ice 00:00:38.369 + [[ tcp == \r\d\m\a ]] 00:00:38.369 + [[ -n ice ]] 00:00:38.369 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:38.369 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:46.514 rmmod: ERROR: Module irdma is not currently loaded 00:00:46.514 rmmod: ERROR: Module i40iw is not currently loaded 00:00:46.514 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:46.514 + true 00:00:46.514 + for D in $DRIVERS 00:00:46.514 + sudo modprobe ice 00:00:46.514 + exit 0 00:00:46.525 [Pipeline] } 00:00:46.543 [Pipeline] // withEnv 00:00:46.549 [Pipeline] } 00:00:46.566 [Pipeline] // stage 00:00:46.577 [Pipeline] catchError 00:00:46.579 [Pipeline] { 00:00:46.595 [Pipeline] timeout 00:00:46.595 Timeout set to expire in 40 min 00:00:46.597 [Pipeline] { 00:00:46.613 [Pipeline] stage 00:00:46.616 [Pipeline] { (Tests) 00:00:46.628 [Pipeline] sh 00:00:46.913 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.913 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.913 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.913 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:46.913 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:46.913 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:46.913 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:46.913 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:46.913 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:46.913 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:46.913 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.913 + source /etc/os-release 00:00:46.913 ++ NAME='Fedora Linux' 00:00:46.913 ++ VERSION='38 (Cloud Edition)' 00:00:46.913 ++ ID=fedora 00:00:46.913 ++ VERSION_ID=38 00:00:46.913 ++ VERSION_CODENAME= 00:00:46.913 ++ PLATFORM_ID=platform:f38 00:00:46.913 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:46.913 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:46.913 ++ LOGO=fedora-logo-icon 00:00:46.913 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:46.913 ++ HOME_URL=https://fedoraproject.org/ 00:00:46.913 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:46.913 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:46.913 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:46.913 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:46.913 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:46.913 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:46.913 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:46.913 ++ SUPPORT_END=2024-05-14 00:00:46.913 ++ VARIANT='Cloud Edition' 00:00:46.913 ++ VARIANT_ID=cloud 00:00:46.913 + uname -a 00:00:46.913 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:46.913 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:50.217 Hugepages 00:00:50.217 node hugesize free / total 00:00:50.217 node0 1048576kB 0 / 0 00:00:50.217 node0 2048kB 0 / 0 00:00:50.217 node1 1048576kB 0 / 0 00:00:50.217 node1 2048kB 0 / 0 00:00:50.217 00:00:50.217 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:50.217 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:50.217 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:50.217 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:50.217 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:50.217 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:50.217 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:50.217 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:50.217 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:50.217 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:50.217 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:50.217 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:50.217 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:50.217 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:50.217 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:50.217 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:50.217 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:50.217 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:50.217 + rm -f /tmp/spdk-ld-path 00:00:50.217 + source autorun-spdk.conf 00:00:50.217 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.217 ++ SPDK_TEST_NVMF=1 00:00:50.217 ++ SPDK_TEST_NVME_CLI=1 00:00:50.217 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.217 ++ SPDK_TEST_NVMF_NICS=e810 00:00:50.217 ++ SPDK_TEST_VFIOUSER=1 00:00:50.217 ++ SPDK_RUN_UBSAN=1 00:00:50.217 ++ NET_TYPE=phy 00:00:50.217 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:50.217 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:50.217 ++ RUN_NIGHTLY=1 00:00:50.217 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:50.217 + [[ -n '' ]] 00:00:50.217 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
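Two pieces of the trace above are worth unpacking: autorun-spdk.conf is plain KEY=VALUE shell, so the job simply sources it, and the NIC driver prep keys off SPDK_TEST_NVMF_NICS while tolerating rmmod failures for modules that were never loaded. A condensed sketch covering only the e810-to-ice mapping this run exercises:

set -e
CONF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
[[ -f "$CONF" ]] && source "$CONF"   # exports SPDK_TEST_* and friends

# Map the NIC under test to its kernel driver (only the case seen here).
case "$SPDK_TEST_NVMF_NICS" in
  e810) DRIVERS=ice ;;
esac

# Unload competing RDMA modules; absent modules just print an error.
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true

for D in $DRIVERS; do
  sudo modprobe "$D"
done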
00:00:50.217 + for M in /var/spdk/build-*-manifest.txt 00:00:50.217 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:50.217 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:50.217 + for M in /var/spdk/build-*-manifest.txt 00:00:50.217 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:50.217 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:50.217 ++ uname 00:00:50.217 + [[ Linux == \L\i\n\u\x ]] 00:00:50.217 + sudo dmesg -T 00:00:50.217 + sudo dmesg --clear 00:00:50.217 + dmesg_pid=3582558 00:00:50.217 + [[ Fedora Linux == FreeBSD ]] 00:00:50.217 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:50.217 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:50.217 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:50.217 + [[ -x /usr/src/fio-static/fio ]] 00:00:50.217 + export FIO_BIN=/usr/src/fio-static/fio 00:00:50.217 + FIO_BIN=/usr/src/fio-static/fio 00:00:50.217 + sudo dmesg -Tw 00:00:50.217 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:50.217 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:50.217 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:50.217 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.217 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.217 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:50.217 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.217 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.218 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:50.218 Test configuration: 00:00:50.218 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.218 SPDK_TEST_NVMF=1 00:00:50.218 SPDK_TEST_NVME_CLI=1 00:00:50.218 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.218 SPDK_TEST_NVMF_NICS=e810 00:00:50.218 SPDK_TEST_VFIOUSER=1 00:00:50.218 SPDK_RUN_UBSAN=1 00:00:50.218 NET_TYPE=phy 00:00:50.218 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:50.218 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:50.218 RUN_NIGHTLY=1 23:03:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:50.218 23:03:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:50.218 23:03:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:50.218 23:03:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:50.218 23:03:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.218 23:03:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.218 23:03:39 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.218 23:03:39 -- paths/export.sh@5 -- $ export PATH 00:00:50.218 23:03:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.218 23:03:39 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:50.218 23:03:39 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:50.218 23:03:39 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714165419.XXXXXX 00:00:50.218 23:03:39 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714165419.eihQd4 00:00:50.218 23:03:39 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:50.218 23:03:39 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:00:50.218 23:03:39 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:50.218 23:03:39 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:50.218 23:03:39 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:50.218 23:03:39 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:50.218 23:03:39 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:50.218 23:03:39 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:50.218 23:03:39 -- common/autotest_common.sh@10 -- $ set +x 00:00:50.218 23:03:39 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:50.218 23:03:39 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:50.218 23:03:39 -- pm/common@17 -- $ local monitor 00:00:50.218 23:03:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.218 23:03:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3582595 00:00:50.218 23:03:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.218 23:03:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3582597 00:00:50.218 23:03:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.218 23:03:39 -- pm/common@21 -- $ date +%s 00:00:50.218 23:03:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3582599 00:00:50.218 23:03:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.218 23:03:39 -- 
pm/common@21 -- $ date +%s 00:00:50.218 23:03:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3582602 00:00:50.218 23:03:39 -- pm/common@26 -- $ sleep 1 00:00:50.218 23:03:39 -- pm/common@21 -- $ date +%s 00:00:50.218 23:03:39 -- pm/common@21 -- $ date +%s 00:00:50.218 23:03:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714165419 00:00:50.218 23:03:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714165419 00:00:50.218 23:03:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714165419 00:00:50.218 23:03:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714165419 00:00:50.478 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714165419_collect-cpu-load.pm.log 00:00:50.478 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714165419_collect-vmstat.pm.log 00:00:50.478 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714165419_collect-bmc-pm.bmc.pm.log 00:00:50.478 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714165419_collect-cpu-temp.pm.log 00:00:51.420 23:03:40 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:51.420 23:03:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:51.420 23:03:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:51.420 23:03:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:51.420 23:03:40 -- spdk/autobuild.sh@16 -- $ date -u 00:00:51.420 Fri Apr 26 09:03:40 PM UTC 2024 00:00:51.420 23:03:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:51.420 v24.05-pre-449-g8571999d8 00:00:51.420 23:03:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:51.420 23:03:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:51.420 23:03:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:51.420 23:03:40 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:51.420 23:03:40 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:51.420 23:03:40 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.420 ************************************ 00:00:51.420 START TEST ubsan 00:00:51.420 ************************************ 00:00:51.420 23:03:40 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:51.420 using ubsan 00:00:51.420 00:00:51.420 real 0m0.001s 00:00:51.420 user 0m0.000s 00:00:51.420 sys 0m0.001s 00:00:51.420 23:03:40 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:51.420 23:03:40 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.420 ************************************ 00:00:51.420 END TEST ubsan 00:00:51.420 ************************************ 00:00:51.420 23:03:40 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:00:51.420 23:03:40 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 
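The "Redirecting to ..." lines a little earlier come from the resource monitors: one background collector per metric, all tagged with the same epoch timestamp and tracked by PID so they can be reaped at exit. A simplified sketch of that startup, assuming the collect-* scripts behave as traced (the real pm/common helper is more elaborate):

declare -A MONITOR_RESOURCES_PIDS
ts=$(date +%s)
pm=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/power

for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
  # Each collector logs into $out under a name keyed by the shared timestamp.
  sudo -E "$pm/$monitor" -d "$out" -l -p "monitor.autobuild.sh.$ts" &
  MONITOR_RESOURCES_PIDS["$monitor"]=$!
done

# Stop all collectors when the build exits, successful or not.
trap 'sudo kill "${MONITOR_RESOURCES_PIDS[@]}" 2>/dev/null' EXIT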
00:00:51.420 23:03:40 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:51.420 23:03:40 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:00:51.420 23:03:40 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:51.420 23:03:40 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.680 ************************************ 00:00:51.680 START TEST build_native_dpdk 00:00:51.680 ************************************ 00:00:51.680 23:03:40 -- common/autotest_common.sh@1111 -- $ _build_native_dpdk 00:00:51.680 23:03:40 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:51.680 23:03:40 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:51.680 23:03:40 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:51.680 23:03:40 -- common/autobuild_common.sh@51 -- $ local compiler 00:00:51.680 23:03:40 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:51.680 23:03:40 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:51.680 23:03:40 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:51.680 23:03:40 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:51.680 23:03:40 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:51.680 23:03:40 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:51.680 23:03:40 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:51.680 23:03:40 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:51.680 23:03:40 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:51.680 23:03:40 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:51.680 23:03:40 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:51.680 23:03:40 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:51.680 23:03:40 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:51.680 23:03:40 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:51.680 23:03:40 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:51.680 23:03:40 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:51.680 eeb0605f11 version: 23.11.0 00:00:51.680 238778122a doc: update release notes for 23.11 00:00:51.680 46aa6b3cfc doc: fix description of RSS features 00:00:51.680 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:51.680 7e421ae345 devtools: support skipping forbid rule check 00:00:51.680 23:03:40 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:51.680 23:03:40 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:51.680 23:03:40 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:00:51.680 23:03:40 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:51.680 23:03:40 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:51.680 23:03:40 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:51.680 23:03:40 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:51.680 23:03:40 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:51.680 23:03:40 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:51.680 23:03:40 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:51.680 23:03:40 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:51.680 23:03:40 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:51.680 23:03:40 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:51.680 23:03:40 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:51.680 23:03:40 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:51.680 23:03:40 -- common/autobuild_common.sh@168 -- $ uname -s 00:00:51.680 23:03:40 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:51.680 23:03:40 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:00:51.680 23:03:40 -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:00:51.680 23:03:40 -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:51.680 23:03:40 -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:51.680 23:03:40 -- scripts/common.sh@333 -- $ IFS=.-: 00:00:51.680 23:03:40 -- scripts/common.sh@333 -- $ read -ra ver1 00:00:51.680 23:03:40 -- scripts/common.sh@334 -- $ IFS=.-: 00:00:51.680 23:03:40 -- scripts/common.sh@334 -- $ read -ra ver2 00:00:51.680 23:03:40 -- scripts/common.sh@335 -- $ local 'op=<' 00:00:51.680 23:03:40 -- scripts/common.sh@337 -- $ ver1_l=3 00:00:51.680 23:03:40 -- scripts/common.sh@338 -- $ ver2_l=3 00:00:51.680 23:03:40 -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:51.680 23:03:40 -- scripts/common.sh@341 -- $ case "$op" in 00:00:51.680 23:03:40 -- scripts/common.sh@342 -- $ : 1 00:00:51.680 23:03:40 -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:51.680 23:03:40 -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:51.680 23:03:40 -- scripts/common.sh@362 -- $ decimal 23 00:00:51.680 23:03:40 -- scripts/common.sh@350 -- $ local d=23 00:00:51.680 23:03:40 -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:00:51.680 23:03:40 -- scripts/common.sh@352 -- $ echo 23 00:00:51.680 23:03:40 -- scripts/common.sh@362 -- $ ver1[v]=23 00:00:51.680 23:03:40 -- scripts/common.sh@363 -- $ decimal 21 00:00:51.680 23:03:40 -- scripts/common.sh@350 -- $ local d=21 00:00:51.680 23:03:40 -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:51.680 23:03:40 -- scripts/common.sh@352 -- $ echo 21 00:00:51.680 23:03:40 -- scripts/common.sh@363 -- $ ver2[v]=21 00:00:51.680 23:03:40 -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:51.680 23:03:40 -- scripts/common.sh@364 -- $ return 1 00:00:51.680 23:03:40 -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:51.680 patching file config/rte_config.h 00:00:51.680 Hunk #1 succeeded at 60 (offset 1 line). 00:00:51.680 23:03:40 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:00:51.680 23:03:40 -- common/autobuild_common.sh@178 -- $ uname -s 00:00:51.680 23:03:40 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:00:51.680 23:03:40 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:51.681 23:03:40 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:56.967 The Meson build system 00:00:56.967 Version: 1.3.1 00:00:56.967 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:56.967 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:00:56.968 Build type: native build 00:00:56.968 Program cat found: YES (/usr/bin/cat) 00:00:56.968 Project name: DPDK 00:00:56.968 Project version: 23.11.0 00:00:56.968 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:56.968 C linker for the host machine: gcc ld.bfd 2.39-16 00:00:56.968 Host machine cpu family: x86_64 00:00:56.968 Host machine cpu: x86_64 00:00:56.968 Message: ## Building in Developer Mode ## 00:00:56.968 Program pkg-config found: YES (/usr/bin/pkg-config) 00:00:56.968 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:00:56.968 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:00:56.968 Program python3 found: YES (/usr/bin/python3) 00:00:56.968 Program cat found: YES (/usr/bin/cat) 00:00:56.968 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
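The cmp_versions trace just above splits both version strings on ".", "-" and ":", compares the fields numerically, and the "lt 23.11.0 21.11.0" test fails (return 1), which is what routes the build into the patch-then-meson branch for a modern DPDK. A sketch of that comparison logic, reconstructed from the trace:

# lt VER1 VER2 -> succeeds only if VER1 is strictly older than VER2.
lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"   # split on ".", "-" and ":" as above
  IFS='.-:' read -ra ver2 <<< "$2"
  local v
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}

lt 23.11.0 21.11.0 && echo "older" || echo "not older"   # prints "not older", as traced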
00:00:56.968 Compiler for C supports arguments -march=native: YES 00:00:56.968 Checking for size of "void *" : 8 00:00:56.968 Checking for size of "void *" : 8 (cached) 00:00:56.968 Library m found: YES 00:00:56.968 Library numa found: YES 00:00:56.968 Has header "numaif.h" : YES 00:00:56.968 Library fdt found: NO 00:00:56.968 Library execinfo found: NO 00:00:56.968 Has header "execinfo.h" : YES 00:00:56.968 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:56.968 Run-time dependency libarchive found: NO (tried pkgconfig) 00:00:56.968 Run-time dependency libbsd found: NO (tried pkgconfig) 00:00:56.968 Run-time dependency jansson found: NO (tried pkgconfig) 00:00:56.968 Run-time dependency openssl found: YES 3.0.9 00:00:56.968 Run-time dependency libpcap found: YES 1.10.4 00:00:56.968 Has header "pcap.h" with dependency libpcap: YES 00:00:56.968 Compiler for C supports arguments -Wcast-qual: YES 00:00:56.968 Compiler for C supports arguments -Wdeprecated: YES 00:00:56.968 Compiler for C supports arguments -Wformat: YES 00:00:56.968 Compiler for C supports arguments -Wformat-nonliteral: NO 00:00:56.968 Compiler for C supports arguments -Wformat-security: NO 00:00:56.968 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:56.968 Compiler for C supports arguments -Wmissing-prototypes: YES 00:00:56.968 Compiler for C supports arguments -Wnested-externs: YES 00:00:56.968 Compiler for C supports arguments -Wold-style-definition: YES 00:00:56.968 Compiler for C supports arguments -Wpointer-arith: YES 00:00:56.968 Compiler for C supports arguments -Wsign-compare: YES 00:00:56.968 Compiler for C supports arguments -Wstrict-prototypes: YES 00:00:56.968 Compiler for C supports arguments -Wundef: YES 00:00:56.968 Compiler for C supports arguments -Wwrite-strings: YES 00:00:56.968 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:00:56.968 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:00:56.968 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:56.968 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:00:56.968 Program objdump found: YES (/usr/bin/objdump) 00:00:56.968 Compiler for C supports arguments -mavx512f: YES 00:00:56.968 Checking if "AVX512 checking" compiles: YES 00:00:56.968 Fetching value of define "__SSE4_2__" : 1 00:00:56.968 Fetching value of define "__AES__" : 1 00:00:56.968 Fetching value of define "__AVX__" : 1 00:00:56.968 Fetching value of define "__AVX2__" : 1 00:00:56.968 Fetching value of define "__AVX512BW__" : 1 00:00:56.968 Fetching value of define "__AVX512CD__" : 1 00:00:56.968 Fetching value of define "__AVX512DQ__" : 1 00:00:56.968 Fetching value of define "__AVX512F__" : 1 00:00:56.968 Fetching value of define "__AVX512VL__" : 1 00:00:56.968 Fetching value of define "__PCLMUL__" : 1 00:00:56.968 Fetching value of define "__RDRND__" : 1 00:00:56.968 Fetching value of define "__RDSEED__" : 1 00:00:56.968 Fetching value of define "__VPCLMULQDQ__" : 1 00:00:56.968 Fetching value of define "__znver1__" : (undefined) 00:00:56.968 Fetching value of define "__znver2__" : (undefined) 00:00:56.968 Fetching value of define "__znver3__" : (undefined) 00:00:56.968 Fetching value of define "__znver4__" : (undefined) 00:00:56.968 Compiler for C supports arguments -Wno-format-truncation: YES 00:00:56.968 Message: lib/log: Defining dependency "log" 00:00:56.968 Message: lib/kvargs: Defining dependency "kvargs" 00:00:56.968 Message: lib/telemetry: Defining dependency "telemetry" 
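Each "Compiler for C supports arguments ...: YES/NO" line above is meson probing whether gcc accepts a flag. Meson's actual mechanism is more involved, but it roughly amounts to compiling an empty program with the candidate flag and inspecting the exit status; a hypothetical stand-in (check_cflag is illustrative, not meson's real API):

# Roughly what each flag probe amounts to: compile an empty program with
# the candidate flag, treating warnings (including unknown-option ones)
# as failure via -Werror.
check_cflag() {
  echo 'int main(void) { return 0; }' |
    gcc -Werror "$1" -x c - -o /dev/null 2>/dev/null
}

check_cflag -Wcast-qual && echo "-Wcast-qual: YES" || echo "-Wcast-qual: NO"
check_cflag -mavx512f   && echo "-mavx512f: YES"   || echo "-mavx512f: NO"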
00:00:56.968 Checking for function "getentropy" : NO 00:00:56.968 Message: lib/eal: Defining dependency "eal" 00:00:56.968 Message: lib/ring: Defining dependency "ring" 00:00:56.968 Message: lib/rcu: Defining dependency "rcu" 00:00:56.968 Message: lib/mempool: Defining dependency "mempool" 00:00:56.968 Message: lib/mbuf: Defining dependency "mbuf" 00:00:56.968 Fetching value of define "__PCLMUL__" : 1 (cached) 00:00:56.968 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:56.968 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:56.968 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:56.968 Fetching value of define "__AVX512VL__" : 1 (cached) 00:00:56.968 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:00:56.968 Compiler for C supports arguments -mpclmul: YES 00:00:56.968 Compiler for C supports arguments -maes: YES 00:00:56.968 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:56.968 Compiler for C supports arguments -mavx512bw: YES 00:00:56.968 Compiler for C supports arguments -mavx512dq: YES 00:00:56.968 Compiler for C supports arguments -mavx512vl: YES 00:00:56.968 Compiler for C supports arguments -mvpclmulqdq: YES 00:00:56.968 Compiler for C supports arguments -mavx2: YES 00:00:56.968 Compiler for C supports arguments -mavx: YES 00:00:56.968 Message: lib/net: Defining dependency "net" 00:00:56.968 Message: lib/meter: Defining dependency "meter" 00:00:56.968 Message: lib/ethdev: Defining dependency "ethdev" 00:00:56.968 Message: lib/pci: Defining dependency "pci" 00:00:56.968 Message: lib/cmdline: Defining dependency "cmdline" 00:00:56.968 Message: lib/metrics: Defining dependency "metrics" 00:00:56.968 Message: lib/hash: Defining dependency "hash" 00:00:56.968 Message: lib/timer: Defining dependency "timer" 00:00:56.968 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:56.968 Fetching value of define "__AVX512VL__" : 1 (cached) 00:00:56.968 Fetching value of define "__AVX512CD__" : 1 (cached) 00:00:56.968 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:56.968 Message: lib/acl: Defining dependency "acl" 00:00:56.968 Message: lib/bbdev: Defining dependency "bbdev" 00:00:56.968 Message: lib/bitratestats: Defining dependency "bitratestats" 00:00:56.968 Run-time dependency libelf found: YES 0.190 00:00:56.968 Message: lib/bpf: Defining dependency "bpf" 00:00:56.968 Message: lib/cfgfile: Defining dependency "cfgfile" 00:00:56.968 Message: lib/compressdev: Defining dependency "compressdev" 00:00:56.968 Message: lib/cryptodev: Defining dependency "cryptodev" 00:00:56.968 Message: lib/distributor: Defining dependency "distributor" 00:00:56.968 Message: lib/dmadev: Defining dependency "dmadev" 00:00:56.968 Message: lib/efd: Defining dependency "efd" 00:00:56.968 Message: lib/eventdev: Defining dependency "eventdev" 00:00:56.968 Message: lib/dispatcher: Defining dependency "dispatcher" 00:00:56.968 Message: lib/gpudev: Defining dependency "gpudev" 00:00:56.968 Message: lib/gro: Defining dependency "gro" 00:00:56.968 Message: lib/gso: Defining dependency "gso" 00:00:56.968 Message: lib/ip_frag: Defining dependency "ip_frag" 00:00:56.968 Message: lib/jobstats: Defining dependency "jobstats" 00:00:56.968 Message: lib/latencystats: Defining dependency "latencystats" 00:00:56.968 Message: lib/lpm: Defining dependency "lpm" 00:00:56.968 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:56.968 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:56.968 Fetching value of define "__AVX512IFMA__" : 1 00:00:56.968 Message: 
lib/member: Defining dependency "member" 00:00:56.968 Message: lib/pcapng: Defining dependency "pcapng" 00:00:56.968 Compiler for C supports arguments -Wno-cast-qual: YES 00:00:56.968 Message: lib/power: Defining dependency "power" 00:00:56.968 Message: lib/rawdev: Defining dependency "rawdev" 00:00:56.968 Message: lib/regexdev: Defining dependency "regexdev" 00:00:56.968 Message: lib/mldev: Defining dependency "mldev" 00:00:56.968 Message: lib/rib: Defining dependency "rib" 00:00:56.968 Message: lib/reorder: Defining dependency "reorder" 00:00:56.968 Message: lib/sched: Defining dependency "sched" 00:00:56.968 Message: lib/security: Defining dependency "security" 00:00:56.968 Message: lib/stack: Defining dependency "stack" 00:00:56.968 Has header "linux/userfaultfd.h" : YES 00:00:56.968 Has header "linux/vduse.h" : YES 00:00:56.968 Message: lib/vhost: Defining dependency "vhost" 00:00:56.969 Message: lib/ipsec: Defining dependency "ipsec" 00:00:56.969 Message: lib/pdcp: Defining dependency "pdcp" 00:00:56.969 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:56.969 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:56.969 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:56.969 Message: lib/fib: Defining dependency "fib" 00:00:56.969 Message: lib/port: Defining dependency "port" 00:00:56.969 Message: lib/pdump: Defining dependency "pdump" 00:00:56.969 Message: lib/table: Defining dependency "table" 00:00:56.969 Message: lib/pipeline: Defining dependency "pipeline" 00:00:56.969 Message: lib/graph: Defining dependency "graph" 00:00:56.969 Message: lib/node: Defining dependency "node" 00:00:56.969 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:00:56.969 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:00:56.969 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:00:57.915 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:00:57.915 Compiler for C supports arguments -Wno-sign-compare: YES 00:00:57.915 Compiler for C supports arguments -Wno-unused-value: YES 00:00:57.915 Compiler for C supports arguments -Wno-format: YES 00:00:57.915 Compiler for C supports arguments -Wno-format-security: YES 00:00:57.915 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:00:57.915 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:00:57.915 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:00:57.915 Compiler for C supports arguments -Wno-unused-parameter: YES 00:00:57.915 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:57.915 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:57.915 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:57.915 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:57.915 Compiler for C supports arguments -march=skylake-avx512: YES 00:00:57.915 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:00:57.915 Has header "sys/epoll.h" : YES 00:00:57.915 Program doxygen found: YES (/usr/bin/doxygen) 00:00:57.915 Configuring doxy-api-html.conf using configuration 00:00:57.915 Configuring doxy-api-man.conf using configuration 00:00:57.915 Program mandb found: YES (/usr/bin/mandb) 00:00:57.915 Program sphinx-build found: NO 00:00:57.915 Configuring rte_build_config.h using configuration 00:00:57.915 Message: 00:00:57.915 ================= 00:00:57.915 Applications Enabled 00:00:57.915 ================= 00:00:57.915 00:00:57.915 apps: 00:00:57.915 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:00:57.915 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:00:57.915 test-pmd, test-regex, test-sad, test-security-perf, 00:00:57.915 00:00:57.915 Message: 00:00:57.915 ================= 00:00:57.915 Libraries Enabled 00:00:57.915 ================= 00:00:57.915 00:00:57.915 libs: 00:00:57.915 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:00:57.915 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:00:57.915 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:00:57.915 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:00:57.915 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:00:57.915 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:00:57.915 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:00:57.915 00:00:57.915 00:00:57.915 Message: 00:00:57.915 =============== 00:00:57.915 Drivers Enabled 00:00:57.915 =============== 00:00:57.915 00:00:57.915 common: 00:00:57.915 00:00:57.915 bus: 00:00:57.915 pci, vdev, 00:00:57.915 mempool: 00:00:57.915 ring, 00:00:57.915 dma: 00:00:57.915 00:00:57.915 net: 00:00:57.915 i40e, 00:00:57.915 raw: 00:00:57.915 00:00:57.915 crypto: 00:00:57.915 00:00:57.915 compress: 00:00:57.915 00:00:57.915 regex: 00:00:57.915 00:00:57.915 ml: 00:00:57.915 00:00:57.915 vdpa: 00:00:57.915 00:00:57.915 event: 00:00:57.915 00:00:57.915 baseband: 00:00:57.915 00:00:57.915 gpu: 00:00:57.915 00:00:57.915 00:00:57.915 Message: 00:00:57.915 ================= 00:00:57.915 Content Skipped 00:00:57.915 ================= 00:00:57.915 00:00:57.915 apps: 00:00:57.915 00:00:57.915 libs: 00:00:57.915 00:00:57.915 drivers: 00:00:57.915 common/cpt: not in enabled drivers build config 00:00:57.915 common/dpaax: not in enabled drivers build config 00:00:57.915 common/iavf: not in enabled drivers build config 00:00:57.915 common/idpf: not in enabled drivers build config 00:00:57.915 common/mvep: not in enabled drivers build config 00:00:57.915 common/octeontx: not in enabled drivers build config 00:00:57.915 bus/auxiliary: not in enabled drivers build config 00:00:57.915 bus/cdx: not in enabled drivers build config 00:00:57.915 bus/dpaa: not in enabled drivers build config 00:00:57.915 bus/fslmc: not in enabled drivers build config 00:00:57.915 bus/ifpga: not in enabled drivers build config 00:00:57.915 bus/platform: not in enabled drivers build config 00:00:57.915 bus/vmbus: not in enabled drivers build config 00:00:57.915 common/cnxk: not in enabled drivers build config 00:00:57.915 common/mlx5: not in enabled drivers build config 00:00:57.915 common/nfp: not in enabled drivers build config 00:00:57.915 common/qat: not in enabled drivers build config 00:00:57.915 common/sfc_efx: not in enabled drivers build config 00:00:57.915 mempool/bucket: not in enabled drivers build config 00:00:57.915 mempool/cnxk: not in enabled drivers build config 00:00:57.915 mempool/dpaa: not in enabled drivers build config 00:00:57.915 mempool/dpaa2: not in enabled drivers build config 00:00:57.915 mempool/octeontx: not in enabled drivers build config 00:00:57.915 mempool/stack: not in enabled drivers build config 00:00:57.915 dma/cnxk: not in enabled drivers build config 00:00:57.915 dma/dpaa: not in enabled drivers build config 00:00:57.915 dma/dpaa2: not in enabled drivers build config 00:00:57.915 dma/hisilicon: not in enabled drivers build config 00:00:57.915 dma/idxd: not in enabled drivers build 
config 00:00:57.915 dma/ioat: not in enabled drivers build config 00:00:57.915 dma/skeleton: not in enabled drivers build config 00:00:57.915 net/af_packet: not in enabled drivers build config 00:00:57.915 net/af_xdp: not in enabled drivers build config 00:00:57.915 net/ark: not in enabled drivers build config 00:00:57.915 net/atlantic: not in enabled drivers build config 00:00:57.915 net/avp: not in enabled drivers build config 00:00:57.915 net/axgbe: not in enabled drivers build config 00:00:57.915 net/bnx2x: not in enabled drivers build config 00:00:57.915 net/bnxt: not in enabled drivers build config 00:00:57.915 net/bonding: not in enabled drivers build config 00:00:57.915 net/cnxk: not in enabled drivers build config 00:00:57.915 net/cpfl: not in enabled drivers build config 00:00:57.915 net/cxgbe: not in enabled drivers build config 00:00:57.915 net/dpaa: not in enabled drivers build config 00:00:57.915 net/dpaa2: not in enabled drivers build config 00:00:57.915 net/e1000: not in enabled drivers build config 00:00:57.916 net/ena: not in enabled drivers build config 00:00:57.916 net/enetc: not in enabled drivers build config 00:00:57.916 net/enetfec: not in enabled drivers build config 00:00:57.916 net/enic: not in enabled drivers build config 00:00:57.916 net/failsafe: not in enabled drivers build config 00:00:57.916 net/fm10k: not in enabled drivers build config 00:00:57.916 net/gve: not in enabled drivers build config 00:00:57.916 net/hinic: not in enabled drivers build config 00:00:57.916 net/hns3: not in enabled drivers build config 00:00:57.916 net/iavf: not in enabled drivers build config 00:00:57.916 net/ice: not in enabled drivers build config 00:00:57.916 net/idpf: not in enabled drivers build config 00:00:57.916 net/igc: not in enabled drivers build config 00:00:57.916 net/ionic: not in enabled drivers build config 00:00:57.916 net/ipn3ke: not in enabled drivers build config 00:00:57.916 net/ixgbe: not in enabled drivers build config 00:00:57.916 net/mana: not in enabled drivers build config 00:00:57.916 net/memif: not in enabled drivers build config 00:00:57.916 net/mlx4: not in enabled drivers build config 00:00:57.916 net/mlx5: not in enabled drivers build config 00:00:57.916 net/mvneta: not in enabled drivers build config 00:00:57.916 net/mvpp2: not in enabled drivers build config 00:00:57.916 net/netvsc: not in enabled drivers build config 00:00:57.916 net/nfb: not in enabled drivers build config 00:00:57.916 net/nfp: not in enabled drivers build config 00:00:57.916 net/ngbe: not in enabled drivers build config 00:00:57.916 net/null: not in enabled drivers build config 00:00:57.916 net/octeontx: not in enabled drivers build config 00:00:57.916 net/octeon_ep: not in enabled drivers build config 00:00:57.916 net/pcap: not in enabled drivers build config 00:00:57.916 net/pfe: not in enabled drivers build config 00:00:57.916 net/qede: not in enabled drivers build config 00:00:57.916 net/ring: not in enabled drivers build config 00:00:57.916 net/sfc: not in enabled drivers build config 00:00:57.916 net/softnic: not in enabled drivers build config 00:00:57.916 net/tap: not in enabled drivers build config 00:00:57.916 net/thunderx: not in enabled drivers build config 00:00:57.916 net/txgbe: not in enabled drivers build config 00:00:57.916 net/vdev_netvsc: not in enabled drivers build config 00:00:57.916 net/vhost: not in enabled drivers build config 00:00:57.916 net/virtio: not in enabled drivers build config 00:00:57.916 net/vmxnet3: not in enabled drivers build config 
00:00:57.916 raw/cnxk_bphy: not in enabled drivers build config 00:00:57.916 raw/cnxk_gpio: not in enabled drivers build config 00:00:57.916 raw/dpaa2_cmdif: not in enabled drivers build config 00:00:57.916 raw/ifpga: not in enabled drivers build config 00:00:57.916 raw/ntb: not in enabled drivers build config 00:00:57.916 raw/skeleton: not in enabled drivers build config 00:00:57.916 crypto/armv8: not in enabled drivers build config 00:00:57.916 crypto/bcmfs: not in enabled drivers build config 00:00:57.916 crypto/caam_jr: not in enabled drivers build config 00:00:57.916 crypto/ccp: not in enabled drivers build config 00:00:57.916 crypto/cnxk: not in enabled drivers build config 00:00:57.916 crypto/dpaa_sec: not in enabled drivers build config 00:00:57.916 crypto/dpaa2_sec: not in enabled drivers build config 00:00:57.916 crypto/ipsec_mb: not in enabled drivers build config 00:00:57.916 crypto/mlx5: not in enabled drivers build config 00:00:57.916 crypto/mvsam: not in enabled drivers build config 00:00:57.916 crypto/nitrox: not in enabled drivers build config 00:00:57.916 crypto/null: not in enabled drivers build config 00:00:57.916 crypto/octeontx: not in enabled drivers build config 00:00:57.916 crypto/openssl: not in enabled drivers build config 00:00:57.916 crypto/scheduler: not in enabled drivers build config 00:00:57.916 crypto/uadk: not in enabled drivers build config 00:00:57.916 crypto/virtio: not in enabled drivers build config 00:00:57.916 compress/isal: not in enabled drivers build config 00:00:57.916 compress/mlx5: not in enabled drivers build config 00:00:57.916 compress/octeontx: not in enabled drivers build config 00:00:57.916 compress/zlib: not in enabled drivers build config 00:00:57.916 regex/mlx5: not in enabled drivers build config 00:00:57.916 regex/cn9k: not in enabled drivers build config 00:00:57.916 ml/cnxk: not in enabled drivers build config 00:00:57.916 vdpa/ifc: not in enabled drivers build config 00:00:57.916 vdpa/mlx5: not in enabled drivers build config 00:00:57.916 vdpa/nfp: not in enabled drivers build config 00:00:57.916 vdpa/sfc: not in enabled drivers build config 00:00:57.916 event/cnxk: not in enabled drivers build config 00:00:57.916 event/dlb2: not in enabled drivers build config 00:00:57.916 event/dpaa: not in enabled drivers build config 00:00:57.916 event/dpaa2: not in enabled drivers build config 00:00:57.916 event/dsw: not in enabled drivers build config 00:00:57.916 event/opdl: not in enabled drivers build config 00:00:57.916 event/skeleton: not in enabled drivers build config 00:00:57.916 event/sw: not in enabled drivers build config 00:00:57.916 event/octeontx: not in enabled drivers build config 00:00:57.916 baseband/acc: not in enabled drivers build config 00:00:57.916 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:00:57.916 baseband/fpga_lte_fec: not in enabled drivers build config 00:00:57.916 baseband/la12xx: not in enabled drivers build config 00:00:57.916 baseband/null: not in enabled drivers build config 00:00:57.916 baseband/turbo_sw: not in enabled drivers build config 00:00:57.916 gpu/cuda: not in enabled drivers build config 00:00:57.916 00:00:57.916 00:00:57.916 Build targets in project: 215 00:00:57.916 00:00:57.916 DPDK 23.11.0 00:00:57.916 00:00:57.916 User defined options 00:00:57.916 libdir : lib 00:00:57.916 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:57.916 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:00:57.916 c_link_args : 00:00:57.916 enable_docs : false 
00:00:57.916 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:57.916 enable_kmods : false 00:00:57.916 machine : native 00:00:57.916 tests : false 00:00:57.916 00:00:57.916 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:57.916 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:00:57.916 23:03:47 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:00:57.916 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:00:58.182 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:00:58.182 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:00:58.182 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:00:58.182 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:00:58.182 [5/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:00:58.182 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:00:58.182 [7/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:00:58.182 [8/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:58.182 [9/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:00:58.182 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:00:58.182 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:00:58.182 [12/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:58.182 [13/705] Linking static target lib/librte_kvargs.a 00:00:58.182 [14/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:00:58.440 [15/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:58.440 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:00:58.440 [17/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:00:58.440 [18/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:58.440 [19/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:00:58.440 [20/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:58.440 [21/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:58.440 [22/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:00:58.440 [23/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:58.440 [24/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:00:58.440 [25/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:58.440 [26/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:58.440 [27/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:58.440 [28/705] Linking static target lib/librte_pci.a 00:00:58.440 [29/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:58.440 [30/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:58.440 [31/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:58.440 [32/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:00:58.440 [33/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:58.440 [34/705] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:58.440 [35/705] Linking static target lib/librte_log.a 00:00:58.698 [36/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:58.698 [37/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.698 [38/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.698 [39/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:58.698 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:00:58.698 [41/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:00:58.698 [42/705] Linking static target lib/librte_cfgfile.a 00:00:58.698 [43/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:58.698 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:00:58.963 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:00:58.963 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:58.963 [47/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:58.963 [48/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:00:58.963 [49/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:58.963 [50/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:00:58.963 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:00:58.963 [52/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:58.963 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:58.963 [54/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:58.963 [55/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:58.963 [56/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:58.963 [57/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:58.963 [58/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:58.963 [59/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:58.963 [60/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:58.963 [61/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:58.963 [62/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:58.963 [63/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:58.963 [64/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:58.963 [65/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:58.963 [66/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:58.963 [67/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:58.963 [68/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:58.963 [69/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:58.963 [70/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:58.963 [71/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:58.963 [72/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:58.963 [73/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:58.963 [74/705] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:58.963 [75/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:58.963 [76/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:00:58.963 [77/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:00:58.963 [78/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:58.963 [79/705] Linking static target lib/librte_ring.a 00:00:58.963 [80/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:00:58.963 [81/705] Linking static target lib/librte_meter.a 00:00:58.963 [82/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:00:58.963 [83/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:58.963 [84/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:00:58.963 [85/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:58.963 [86/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:58.963 [87/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:00:58.963 [88/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:58.963 [89/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:00:58.963 [90/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:58.963 [91/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:00:58.963 [92/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:00:58.963 [93/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:00:58.963 [94/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:00:58.963 [95/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:00:58.963 [96/705] Linking static target lib/librte_cmdline.a 00:00:58.963 [97/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:00:58.963 [98/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:00:58.963 [99/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:00:58.963 [100/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:00:58.963 [101/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:58.963 [102/705] Linking static target lib/librte_metrics.a 00:00:58.963 [103/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:58.963 [104/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:58.963 [105/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:00:59.226 [106/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:00:59.226 [107/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:59.226 [108/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:00:59.226 [109/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:00:59.226 [110/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:00:59.226 [111/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:00:59.226 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:00:59.226 [113/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:00:59.226 [114/705] Linking static target lib/librte_bitratestats.a 00:00:59.226 [115/705] Compiling C 
object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:00:59.226 [116/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:00:59.227 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:00:59.227 [118/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:00:59.227 [119/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:00:59.227 [120/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:00:59.227 [121/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.227 [122/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:00:59.227 [123/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:00:59.227 [124/705] Linking static target lib/librte_net.a 00:00:59.227 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:00:59.227 [126/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:00:59.227 [127/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:00:59.227 [128/705] Linking target lib/librte_log.so.24.0 00:00:59.227 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:00:59.227 [130/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:00:59.227 [131/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:00:59.227 [132/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:00:59.227 [133/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:00:59.227 [134/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:00:59.227 [135/705] Linking static target lib/librte_compressdev.a 00:00:59.227 [136/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:00:59.227 [137/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:00:59.227 [138/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:00:59.227 [139/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:00:59.227 [140/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.227 [141/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:00:59.227 [142/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:00:59.227 [143/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:00:59.484 [144/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:00:59.484 [145/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:00:59.484 [146/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:00:59.484 [147/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:00:59.484 [148/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:00:59.484 [149/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:00:59.484 [150/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:00:59.484 [151/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.484 [152/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:00:59.484 [153/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:00:59.484 [154/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.484 [155/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 
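For reference, the "User defined options" block in the configure summary above corresponds to a meson configure step along the following lines. This is a reconstruction from the printed summary, not the literal command: the real invocation lives in SPDK's common/autobuild_common.sh and is not echoed in this log, and the WARNING above shows the script ran plain "meson" rather than the modern "meson setup" spelling. Every driver absent from the enable_drivers list is what produces the long run of "not in enabled drivers build config" lines.

# Sketch reconstructed from the configure summary; option values are taken
# from the log and are not verified against the actual SPDK script.
# enable_drivers is copied as printed; that summary line ends with a
# trailing comma, so the list may be truncated in the log.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
meson setup build-tmp \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
  --libdir=lib \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dmachine=native \
  -Dtests=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
ninja -C build-tmp -j144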
00:00:59.484 [156/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:00:59.484 [157/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.484 [158/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:00:59.484 [159/705] Linking static target lib/librte_dispatcher.a 00:00:59.484 [160/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:00:59.484 [161/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:00:59.484 [162/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:00:59.484 [163/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:00:59.484 [164/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:00:59.484 [165/705] Linking static target lib/librte_timer.a 00:00:59.484 [166/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:00:59.484 [167/705] Linking target lib/librte_kvargs.so.24.0 00:00:59.484 [168/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:00:59.484 [169/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:00:59.484 [170/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:00:59.484 [171/705] Linking static target lib/librte_bbdev.a 00:00:59.484 [172/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:00:59.484 [173/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:00:59.484 [174/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:00:59.484 [175/705] Linking static target lib/librte_jobstats.a 00:00:59.484 [176/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:00:59.484 [177/705] Linking static target lib/librte_gpudev.a 00:00:59.484 [178/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:00:59.484 [179/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:00:59.484 [180/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:00:59.484 [181/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:00:59.484 [182/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:00:59.484 [183/705] Linking static target lib/librte_telemetry.a 00:00:59.484 [184/705] Linking static target lib/librte_dmadev.a 00:00:59.746 [185/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:00:59.746 [186/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:00:59.746 [187/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:00:59.746 [188/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:00:59.746 [189/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:00:59.746 [190/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:00:59.746 [191/705] Linking static target lib/librte_mempool.a 00:00:59.746 [192/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:00:59.746 [193/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.746 [194/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:00:59.746 [195/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:00:59.746 [196/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:00:59.746 [197/705] Linking static target lib/librte_gro.a 00:00:59.746 [198/705] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:00:59.746 [199/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:00:59.746 [200/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:00:59.746 [201/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:00:59.746 [202/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:00:59.746 [203/705] Linking static target lib/librte_distributor.a 00:00:59.746 [204/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:00:59.746 [205/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.746 [206/705] Linking static target lib/librte_stack.a 00:00:59.746 [207/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:00:59.746 [208/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:00:59.746 [209/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:00:59.746 [210/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:00:59.746 [211/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:00:59.746 [212/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:00:59.746 [213/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:00:59.746 [214/705] Linking static target lib/librte_latencystats.a 00:00:59.746 [215/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:00:59.746 [216/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:00:59.746 [217/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:00:59.746 [218/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:00:59.746 [219/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:00:59.746 [220/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:00:59.746 [221/705] Linking static target lib/librte_gso.a 00:00:59.746 [222/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:00:59.746 [223/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:00:59.746 [224/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:00:59.746 [225/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:00:59.746 [226/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:00:59.746 [227/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:00:59.746 [228/705] Linking static target lib/librte_regexdev.a 00:00:59.746 [229/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:00:59.746 [230/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:00:59.746 [231/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:00:59.746 [232/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:00.004 [233/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:00.004 [234/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:00.004 [235/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:00.004 [236/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:00.004 [237/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:00.004 [238/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:00.004 [239/705] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:00.004 [240/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:00.004 [241/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:00.004 [242/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:00.004 [243/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:00.004 [244/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:00.004 [245/705] Linking static target lib/librte_mldev.a 00:01:00.004 [246/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:00.004 [247/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:00.004 [248/705] Linking static target lib/librte_rawdev.a 00:01:00.004 [249/705] Linking static target lib/librte_rcu.a 00:01:00.004 [250/705] Linking static target lib/librte_eal.a 00:01:00.004 [251/705] Linking static target lib/librte_power.a 00:01:00.004 [252/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:00.004 [253/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:00.004 [254/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.004 [255/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:00.004 [256/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.004 [257/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:00.004 [258/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:00.004 [259/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:00.004 [260/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:00.004 [261/705] Linking static target lib/librte_pcapng.a 00:01:00.004 [262/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:00.004 [263/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:00.004 [264/705] Linking static target lib/librte_reorder.a 00:01:00.004 [265/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.004 [266/705] Linking static target lib/librte_security.a 00:01:00.004 [267/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.004 [268/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.004 [269/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:00.004 [270/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:00.004 [271/705] Linking static target lib/librte_ip_frag.a 00:01:00.004 [272/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:00.004 [273/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:00.004 [274/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:00.004 [275/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:00.004 [276/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:00.004 [277/705] Linking static target lib/librte_bpf.a 00:01:00.004 [278/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:00.004 [279/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.004 [280/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:00.266 [281/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.266 [282/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:00.266 [283/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:00.266 [284/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:00.266 [285/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.266 [286/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:00.266 [287/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:00.266 [288/705] Linking static target lib/librte_mbuf.a 00:01:00.266 [289/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.266 [290/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:00.266 [291/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:00.266 [292/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:00.266 [293/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:00.266 [294/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:00.266 [295/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:00.266 [296/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:00.266 [297/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.266 [298/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:00.266 [299/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:00.266 [300/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:00.266 [301/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:00.266 [302/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:00.266 [303/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:00.266 [304/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:00.266 [305/705] Linking target lib/librte_telemetry.so.24.0 00:01:00.266 [306/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:00.266 [307/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:00.266 [308/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:00.266 [309/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:00.266 [310/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:00.266 [311/705] Linking static target lib/librte_rib.a 00:01:00.266 [312/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:00.266 [313/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:00.266 [314/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:00.266 [315/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:00.266 [316/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:00.523 [317/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:00.523 [318/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:00.523 [319/705] Linking static target lib/librte_lpm.a 00:01:00.524 [320/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:00.524 [321/705] Compiling C 
object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:00.524 [322/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:00.524 [323/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.524 [324/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:00.524 [325/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.524 [326/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:00.524 [327/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:00.524 [328/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:00.524 [329/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:00.524 [330/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.524 [331/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:00.524 [332/705] Linking static target lib/librte_efd.a 00:01:00.524 [333/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.524 [334/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.524 [335/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:00.524 [336/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:00.524 [337/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:00.524 [338/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:00.524 [339/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:00.524 [340/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:00.524 [341/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.524 [342/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:00.524 [343/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:00.524 [344/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:00.524 [345/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:00.524 [346/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:00.524 [347/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.524 [348/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:00.524 [349/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:00.524 [350/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:00.524 [351/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:00.524 [352/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:00.524 [353/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:00.524 [354/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:00.524 [355/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:00.524 [356/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:00.783 [357/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.783 [358/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:00.783 [359/705] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:00.783 [360/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:00.783 [361/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:00.783 [362/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:00.783 [363/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.783 [364/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:00.783 [365/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:00.783 [366/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:00.783 [367/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:00.783 [368/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.783 [369/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:00.783 [370/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:00.783 [371/705] Linking static target lib/librte_fib.a 00:01:00.783 [372/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:00.783 [373/705] Linking static target lib/librte_graph.a 00:01:00.783 [374/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:00.783 [375/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:00.783 [376/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:00.783 [377/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:00.783 [378/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:00.783 [379/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:00.783 [380/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:00.783 [381/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.783 [382/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:00.783 [383/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:00.783 [384/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:00.783 [385/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:00.783 [386/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:00.783 [387/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:00.783 [388/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:00.783 [389/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:00.783 [390/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:01.044 [391/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:01.044 [392/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:01.044 [393/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:01.044 [394/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:01.044 [395/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:01.044 [396/705] Linking static target lib/librte_pdump.a 00:01:01.044 [397/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:01.044 [398/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:01.044 [399/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:01.044 [400/705] 
Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.044 [401/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:01.044 [402/705] Linking static target drivers/librte_bus_vdev.a 00:01:01.044 [403/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:01.044 [404/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.044 [405/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.044 [406/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:01.044 [407/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:01.044 [408/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:01.044 [409/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:01.044 [410/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:01.045 [411/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:01.045 [412/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:01.045 [413/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:01.045 [414/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:01.045 [415/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:01.045 [416/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:01.045 [417/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:01.045 [418/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.045 [419/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.045 [420/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:01.045 [421/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.045 [422/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:01.045 [423/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:01.045 [424/705] Linking static target lib/librte_cryptodev.a 00:01:01.045 [425/705] Linking static target lib/librte_table.a 00:01:01.045 [426/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:01.045 [427/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:01.045 [428/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:01.045 [429/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:01.045 [430/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:01.305 [431/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:01.305 [432/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:01.305 [433/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:01.305 [434/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:01.305 [435/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:01.305 [436/705] Linking static target drivers/librte_bus_pci.a 00:01:01.305 [437/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:01.305 
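Two step types that recur through this stretch deserve a note. "Generating symbol file lib/<name>.so.24.0.p/<name>.symbols" is meson recording the exported symbols of a freshly linked shared object so that dependents can skip relinking when the interface is unchanged, while "Generating lib/<name>.sym_chk with a custom command" appears to be DPDK's own check that the exports match the library's version map. The same export list can be inspected by hand; a sketch against this build tree (librte_kvargs is just one of the libraries linked above):

# Sketch: list the dynamic symbols a built DPDK shared object exports.
# nm -D prints the dynamic symbol table; --defined-only drops symbols the
# library merely imports; type "T" marks exported code.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
nm -D --defined-only lib/librte_kvargs.so.24.0 | awk '$2 == "T" { print $3 }' | sort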
[438/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:01.305 [439/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:01.305 [440/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:01.305 [441/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:01.305 [442/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:01.305 [443/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:01.305 [444/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:01.305 [445/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.305 [446/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:01.305 [447/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:01.305 [448/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:01.305 [449/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.305 [450/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:01.305 [451/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:01.305 [452/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:01.305 [453/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:01.305 [454/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:01.305 [455/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.305 [456/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:01.305 [457/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:01.305 [458/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:01.305 [459/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:01.305 [460/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:01.305 [461/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:01.305 [462/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:01.305 [463/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:01.305 [464/705] Linking static target lib/librte_sched.a 00:01:01.305 [465/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:01.305 [466/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:01.305 [467/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:01.305 [468/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:01.305 [469/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:01.305 [470/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:01.305 [471/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:01.305 [472/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:01.305 [473/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:01.305 [474/705] Linking static target lib/librte_node.a 00:01:01.305 [475/705] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:01.305 [476/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:01.305 [477/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:01.305 [478/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:01.305 [479/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:01.305 [480/705] Linking static target lib/librte_ipsec.a 00:01:01.305 [481/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:01.566 [482/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:01.566 [483/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:01.566 [484/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:01.566 [485/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:01.566 [486/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.566 [487/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:01.566 [488/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:01.566 [489/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:01.566 [490/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:01.566 [491/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:01.566 [492/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:01.566 [493/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:01.566 [494/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:01.566 [495/705] Linking static target lib/librte_pdcp.a 00:01:01.566 [496/705] Linking static target drivers/librte_mempool_ring.a 00:01:01.566 [497/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:01.566 [498/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:01.566 [499/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:01.566 [500/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:01.566 [501/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:01.566 [502/705] Linking static target lib/librte_member.a 00:01:01.566 [503/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:01.566 [504/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:01.566 [505/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:01.566 [506/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:01.566 [507/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:01.566 [508/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:01.566 [509/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:01.566 [510/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:01.566 [511/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:01.566 [512/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:01.566 [513/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:01.566 [514/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:01.566 [515/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:01.566 [516/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:01.566 [517/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:01.566 [518/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:01.566 [519/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:01.827 [520/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:01.827 [521/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.827 [522/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:01.827 [523/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:01.827 [524/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:01.827 [525/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:01.827 [526/705] Linking static target lib/librte_hash.a 00:01:01.827 [527/705] Linking static target lib/librte_port.a 00:01:01.827 [528/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.827 [529/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:01.827 [530/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.827 [531/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:01.827 [532/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:01.827 [533/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:01.827 [534/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:01.827 [535/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.827 [536/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.827 [537/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:01.827 [538/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:01.827 [539/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:01.827 [540/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:01.827 [541/705] Linking static target lib/librte_eventdev.a 00:01:01.827 [542/705] Linking static target lib/acl/libavx2_tmp.a 00:01:01.827 [543/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:01.827 [544/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.827 [545/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:01.827 [546/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:01.827 [547/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:01.827 [548/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.827 [549/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:01.827 [550/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.088 [551/705] Linking static target 
drivers/net/i40e/libi40e_avx2_lib.a 00:01:02.088 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:02.088 [553/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:02.088 [554/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:02.088 [555/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:02.088 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:02.088 [557/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:02.088 [558/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:02.088 [559/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:02.088 [560/705] Linking static target lib/librte_acl.a 00:01:02.088 [561/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:02.088 [562/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:02.088 [563/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:02.088 [564/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:02.348 [565/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:02.348 [566/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:02.609 [567/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.609 [568/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.609 [569/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.609 [570/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:02.870 [571/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:02.870 [572/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:02.870 [573/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.870 [574/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:02.870 [575/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:02.870 [576/705] Linking static target lib/librte_ethdev.a 00:01:03.130 [577/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:03.130 [578/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:03.701 [579/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:03.701 [580/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:03.701 [581/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:03.963 [582/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:03.963 [583/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:03.963 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:03.963 [585/705] Linking static target drivers/librte_net_i40e.a 00:01:03.963 [586/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:04.905 [587/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:04.905 [588/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.478 [589/705] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:05.478 [590/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.683 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:09.683 [592/705] Linking static target lib/librte_pipeline.a 00:01:10.625 [593/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:10.625 [594/705] Linking static target lib/librte_vhost.a 00:01:10.886 [595/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.886 [596/705] Linking target lib/librte_eal.so.24.0 00:01:10.886 [597/705] Linking target app/dpdk-dumpcap 00:01:10.886 [598/705] Linking target app/dpdk-proc-info 00:01:10.886 [599/705] Linking target app/dpdk-test-cmdline 00:01:10.886 [600/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:11.147 [601/705] Linking target lib/librte_dmadev.so.24.0 00:01:11.147 [602/705] Linking target drivers/librte_bus_vdev.so.24.0 00:01:11.147 [603/705] Linking target lib/librte_ring.so.24.0 00:01:11.147 [604/705] Linking target lib/librte_meter.so.24.0 00:01:11.147 [605/705] Linking target app/dpdk-pdump 00:01:11.147 [606/705] Linking target lib/librte_pci.so.24.0 00:01:11.147 [607/705] Linking target lib/librte_timer.so.24.0 00:01:11.147 [608/705] Linking target lib/librte_cfgfile.so.24.0 00:01:11.147 [609/705] Linking target lib/librte_jobstats.so.24.0 00:01:11.147 [610/705] Linking target lib/librte_acl.so.24.0 00:01:11.147 [611/705] Linking target lib/librte_stack.so.24.0 00:01:11.147 [612/705] Linking target app/dpdk-test-acl 00:01:11.147 [613/705] Linking target lib/librte_rawdev.so.24.0 00:01:11.147 [614/705] Linking target app/dpdk-test-regex 00:01:11.147 [615/705] Linking target app/dpdk-test-flow-perf 00:01:11.147 [616/705] Linking target app/dpdk-test-dma-perf 00:01:11.147 [617/705] Linking target app/dpdk-graph 00:01:11.147 [618/705] Linking target app/dpdk-test-fib 00:01:11.147 [619/705] Linking target app/dpdk-test-sad 00:01:11.147 [620/705] Linking target app/dpdk-test-security-perf 00:01:11.147 [621/705] Linking target app/dpdk-test-mldev 00:01:11.147 [622/705] Linking target app/dpdk-test-pipeline 00:01:11.147 [623/705] Linking target app/dpdk-test-compress-perf 00:01:11.147 [624/705] Linking target app/dpdk-test-bbdev 00:01:11.147 [625/705] Linking target app/dpdk-test-crypto-perf 00:01:11.147 [626/705] Linking target app/dpdk-test-eventdev 00:01:11.147 [627/705] Linking target app/dpdk-test-gpudev 00:01:11.147 [628/705] Linking target app/dpdk-testpmd 00:01:11.147 [629/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:11.147 [630/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:11.147 [631/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:11.147 [632/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:11.147 [633/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:11.147 [634/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:11.147 [635/705] Linking target lib/librte_rcu.so.24.0 00:01:11.147 [636/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:11.147 [637/705] Linking target lib/librte_mempool.so.24.0 00:01:11.147 [638/705] Linking target drivers/librte_bus_pci.so.24.0 00:01:11.408 
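By this point the "Linking target app/..." steps have produced the DPDK test and example binaries under build-tmp/app, dynamically linked against the librte_* shared objects and the i40e PMD built earlier. A quick sanity check along these lines shows what a given binary resolved against (dpdk-testpmd is taken from the link steps above):

# Sketch: show which DPDK shared objects dpdk-testpmd was linked against.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
ldd app/dpdk-testpmd | grep -E 'librte_(eal|ethdev|net_i40e)\.so'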
[639/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.408 [640/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:11.408 [641/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:11.408 [642/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:11.408 [643/705] Linking target lib/librte_mbuf.so.24.0 00:01:11.408 [644/705] Linking target drivers/librte_mempool_ring.so.24.0 00:01:11.408 [645/705] Linking target lib/librte_rib.so.24.0 00:01:11.408 [646/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:11.408 [647/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:11.669 [648/705] Linking target lib/librte_gpudev.so.24.0 00:01:11.669 [649/705] Linking target lib/librte_net.so.24.0 00:01:11.669 [650/705] Linking target lib/librte_mldev.so.24.0 00:01:11.669 [651/705] Linking target lib/librte_bbdev.so.24.0 00:01:11.669 [652/705] Linking target lib/librte_reorder.so.24.0 00:01:11.669 [653/705] Linking target lib/librte_distributor.so.24.0 00:01:11.669 [654/705] Linking target lib/librte_compressdev.so.24.0 00:01:11.669 [655/705] Linking target lib/librte_regexdev.so.24.0 00:01:11.669 [656/705] Linking target lib/librte_fib.so.24.0 00:01:11.669 [657/705] Linking target lib/librte_cryptodev.so.24.0 00:01:11.669 [658/705] Linking target lib/librte_sched.so.24.0 00:01:11.669 [659/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:11.669 [660/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:11.669 [661/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:11.669 [662/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:11.669 [663/705] Linking target lib/librte_hash.so.24.0 00:01:11.669 [664/705] Linking target lib/librte_cmdline.so.24.0 00:01:11.669 [665/705] Linking target lib/librte_ethdev.so.24.0 00:01:11.669 [666/705] Linking target lib/librte_security.so.24.0 00:01:11.929 [667/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:11.929 [668/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:11.929 [669/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:11.929 [670/705] Linking target lib/librte_member.so.24.0 00:01:11.929 [671/705] Linking target lib/librte_efd.so.24.0 00:01:11.929 [672/705] Linking target lib/librte_lpm.so.24.0 00:01:11.929 [673/705] Linking target lib/librte_gro.so.24.0 00:01:11.929 [674/705] Linking target lib/librte_ip_frag.so.24.0 00:01:11.929 [675/705] Linking target lib/librte_pdcp.so.24.0 00:01:11.929 [676/705] Linking target lib/librte_pcapng.so.24.0 00:01:11.929 [677/705] Linking target lib/librte_bpf.so.24.0 00:01:11.929 [678/705] Linking target lib/librte_metrics.so.24.0 00:01:11.929 [679/705] Linking target lib/librte_gso.so.24.0 00:01:11.929 [680/705] Linking target lib/librte_ipsec.so.24.0 00:01:11.929 [681/705] Linking target lib/librte_power.so.24.0 00:01:11.929 [682/705] Linking target lib/librte_eventdev.so.24.0 00:01:11.929 [683/705] Linking target drivers/librte_net_i40e.so.24.0 00:01:12.190 [684/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:12.190 [685/705] Generating symbol file 
lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:12.190 [686/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:12.190 [687/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:12.190 [688/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:12.190 [689/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:12.190 [690/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:12.190 [691/705] Linking target lib/librte_graph.so.24.0 00:01:12.190 [692/705] Linking target lib/librte_pdump.so.24.0 00:01:12.190 [693/705] Linking target lib/librte_latencystats.so.24.0 00:01:12.190 [694/705] Linking target lib/librte_bitratestats.so.24.0 00:01:12.190 [695/705] Linking target lib/librte_dispatcher.so.24.0 00:01:12.190 [696/705] Linking target lib/librte_port.so.24.0 00:01:12.190 [697/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:12.190 [698/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:12.451 [699/705] Linking target lib/librte_node.so.24.0 00:01:12.451 [700/705] Linking target lib/librte_table.so.24.0 00:01:12.451 [701/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:12.451 [702/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.712 [703/705] Linking target lib/librte_vhost.so.24.0 00:01:14.626 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.626 [705/705] Linking target lib/librte_pipeline.so.24.0 00:01:14.626 23:04:03 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:01:14.626 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:14.626 [0/1] Installing files. 
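The "ninja ... install" step launched here copies headers, libraries and pkg-config metadata into the --prefix recorded in the configure summary, which is how the downstream SPDK build in this pipeline finds DPDK. Roughly how an installed tree like this one is consumed (a sketch, not the pipeline's literal logic):

# Sketch: point pkg-config at the just-installed prefix and query it.
PREFIX=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
pkg-config --modversion libdpdk    # should print 23.11.0 for this build
pkg-config --cflags --libs libdpdk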
00:01:14.895 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:14.895 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:14.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:14.897 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:14.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:14.900 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:14.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:14.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:14.902 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:14.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:14.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:14.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:14.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:14.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 
00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:14.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:14.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:14.908 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 
Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.908 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing 
lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.909 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.910 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.910 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.910 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.910 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.910 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:14.910 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_ipsec.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.236 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.237 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.237 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.237 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.237 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:15.237 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.237 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:15.237 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.237 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:15.237 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:15.237 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:15.237 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 
Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.240 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:15.241 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:15.241 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:15.241 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:15.241 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:15.241 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:15.241 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:15.241 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:15.241 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:15.241 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:15.241 Installing symlink pointing to librte_ring.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:15.241 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:15.241 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:15.241 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:15.241 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:15.241 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:15.241 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:15.241 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:15.241 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:15.241 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:15.241 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:15.241 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:15.241 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:15.241 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:15.241 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:15.241 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:15.241 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:15.241 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:15.241 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:15.241 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:15.241 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:15.241 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:15.241 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:15.241 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:15.241 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:15.241 
Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:15.241 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:15.241 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:15.241 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:15.241 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:15.241 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:15.241 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:15.241 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:15.242 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:15.242 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:15.242 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:15.242 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:15.242 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:15.242 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:15.242 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:15.242 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:15.242 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:15.242 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:15.242 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:15.242 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:15.242 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:15.242 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:15.242 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:15.242 Installing symlink pointing to librte_gpudev.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:15.242 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:15.242 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:15.242 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:15.242 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:15.242 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:15.242 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:15.242 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:15.242 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:15.242 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:15.242 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:15.242 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:15.242 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:15.242 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:15.242 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:15.242 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:15.242 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:15.242 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:15.242 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:15.242 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:15.242 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:15.242 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:15.242 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:15.242 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:15.242 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:15.242 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:15.242 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:15.242 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:15.242 './librte_mempool_ring.so.24.0' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:15.242 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:15.242 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:15.242 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:15.242 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:15.242 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:15.242 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:15.242 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:15.242 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:15.242 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:15.242 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:15.242 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:15.242 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:15.242 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:15.242 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:15.242 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:15.242 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:15.242 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:15.242 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:15.242 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:15.242 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:15.242 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:15.242 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:15.242 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:15.242 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:15.242 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:15.242 Installing symlink pointing to librte_fib.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:15.242 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:15.242 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:15.242 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:15.242 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:15.242 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:15.242 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:15.242 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:15.242 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:15.242 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:15.242 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:15.242 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:15.242 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:15.242 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:15.242 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:15.243 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:15.243 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:15.243 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:15.243 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:01:15.243 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:15.243 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:15.243 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:15.243 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:15.243 23:04:04 -- common/autobuild_common.sh@189 -- $ uname -s 00:01:15.243 23:04:04 -- 
common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:01:15.243 23:04:04 -- common/autobuild_common.sh@200 -- $ cat
00:01:15.243 23:04:04 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:15.243
00:01:15.243 real 0m23.584s
00:01:15.243 user 7m6.514s
00:01:15.243 sys 2m43.702s
00:01:15.243 23:04:04 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:01:15.243 23:04:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.243 ************************************
00:01:15.243 END TEST build_native_dpdk
00:01:15.243 ************************************
00:01:15.243 23:04:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:15.243 23:04:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:15.243 23:04:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:15.243 23:04:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:15.243 23:04:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:15.243 23:04:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:15.243 23:04:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:15.243 23:04:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:01:15.505 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:01:15.505 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:01:15.505 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:01:15.767 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:16.028 Using 'verbs' RDMA provider
00:01:31.883 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:44.112 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:44.112 Creating mk/config.mk...done.
00:01:44.112 Creating mk/cc.flags.mk...done.
00:01:44.112 Type 'make' to build.
00:01:44.113 23:04:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
00:01:44.113 23:04:32 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:44.113 23:04:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:44.113 23:04:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:44.113 ************************************
00:01:44.113 START TEST make
00:01:44.113 ************************************
00:01:44.113 23:04:32 -- common/autotest_common.sh@1111 -- $ make -j144
00:01:44.113 make[1]: Nothing to be done for 'all'.
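For anyone reproducing this stage outside the Jenkins job, the xtrace above boils down to pointing SPDK's configure at the DPDK tree built earlier in this log and running make. A minimal sketch, assuming the same workspace layout (the full flag set the job uses is visible in the autobuild.sh@67 line above; trim it to what your target needs):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # --with-dpdk points at the DPDK install tree whose headers and
  # libraries were installed in the preceding section of this log
  ./configure --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
  make -j"$(nproc)"

configure resolves the DPDK libraries through the libdpdk.pc installed under dpdk/build/lib/pkgconfig, which is exactly what the "Using ... pkgconfig for additional libs" line above is reporting.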
00:01:45.050 The Meson build system
00:01:45.050 Version: 1.3.1
00:01:45.050 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:45.050 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:45.050 Build type: native build
00:01:45.050 Project name: libvfio-user
00:01:45.050 Project version: 0.0.1
00:01:45.050 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:45.050 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:45.050 Host machine cpu family: x86_64
00:01:45.050 Host machine cpu: x86_64
00:01:45.050 Run-time dependency threads found: YES
00:01:45.050 Library dl found: YES
00:01:45.050 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:45.050 Run-time dependency json-c found: YES 0.17
00:01:45.050 Run-time dependency cmocka found: YES 1.1.7
00:01:45.050 Program pytest-3 found: NO
00:01:45.050 Program flake8 found: NO
00:01:45.050 Program misspell-fixer found: NO
00:01:45.050 Program restructuredtext-lint found: NO
00:01:45.050 Program valgrind found: YES (/usr/bin/valgrind)
00:01:45.050 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:45.050 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:45.050 Compiler for C supports arguments -Wwrite-strings: YES
00:01:45.050 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:45.050 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:45.050 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:45.050 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
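The dependency probes above are resolved through pkg-config 1.8.0 (found at /usr/bin/pkg-config). If a rebuild on another machine stalls at this stage, the same checks can be reproduced by hand; a small sketch, assuming the json-c and cmocka development packages are installed system-wide:

  # Should print 0.17 and 1.1.7, matching the versions Meson detected here
  pkg-config --modversion json-c cmocka
  # threads and dl are probed via the compiler rather than pkg-config
  echo 'int main(void){return 0;}' | cc -xc - -o /dev/null -lpthread -ldl && echo 'threads/dl OK'

The NO results for pytest-3, flake8, misspell-fixer, and restructuredtext-lint only disable optional test tooling; as the rest of the log shows, the library build itself proceeds without them.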
00:01:45.050 Build targets in project: 8
00:01:45.050 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:45.050 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:45.050
00:01:45.050 libvfio-user 0.0.1
00:01:45.050
00:01:45.050 User defined options
00:01:45.050 buildtype : debug
00:01:45.050 default_library: shared
00:01:45.050 libdir : /usr/local/lib
00:01:45.050
00:01:45.050 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:45.310 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:45.310 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:45.310 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:45.310 [3/37] Compiling C object samples/null.p/null.c.o
00:01:45.310 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:45.310 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:45.310 [6/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:45.310 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:45.310 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:45.310 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:45.310 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:45.310 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:45.310 [12/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:45.310 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:45.310 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:45.310 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:45.310 [16/37] Compiling C object samples/server.p/server.c.o
00:01:45.310 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:45.310 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:45.310 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:45.310 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:45.310 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:45.310 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:45.310 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:45.310 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:45.310 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:45.310 [26/37] Compiling C object samples/client.p/client.c.o
00:01:45.571 [27/37] Linking target samples/client
00:01:45.571 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:45.571 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:45.571 [30/37] Linking target test/unit_tests
00:01:45.571 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:45.571 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:45.571 [33/37] Linking target samples/gpio-pci-idio-16
00:01:45.832 [34/37] Linking target samples/server
00:01:45.832 [35/37] Linking target samples/lspci
00:01:45.832 [36/37] Linking target samples/null
00:01:45.832 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:45.833 INFO: autodetecting backend as ninja
00:01:45.833 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
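The summary and 37-step ninja run above are a standard Meson out-of-tree build; SPDK's submodule wrapper then performs the DESTDIR-redirected install that appears as the very next line of the log. A rough hand-run equivalent (a sketch; the /path/to placeholders stand in for the long workspace paths shown above):

  # Same user-defined options Meson reports above: debug build, shared library, /usr/local/lib
  meson setup build-debug /path/to/libvfio-user --buildtype=debug --default-library=shared --libdir=/usr/local/lib
  ninja -C build-debug
  # DESTDIR reroutes the /usr/local/lib layout into a staging tree instead of the live system
  DESTDIR=/path/to/staging meson install --quiet -C build-debug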
00:01:45.833 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:46.093 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:46.093 ninja: no work to do. 00:01:54.231 CC lib/ut/ut.o 00:01:54.231 CC lib/ut_mock/mock.o 00:01:54.231 CC lib/log/log.o 00:01:54.231 CC lib/log/log_flags.o 00:01:54.231 CC lib/log/log_deprecated.o 00:01:54.231 LIB libspdk_ut.a 00:01:54.231 LIB libspdk_ut_mock.a 00:01:54.231 LIB libspdk_log.a 00:01:54.231 SO libspdk_ut.so.2.0 00:01:54.231 SO libspdk_ut_mock.so.6.0 00:01:54.231 SO libspdk_log.so.7.0 00:01:54.231 SYMLINK libspdk_ut.so 00:01:54.231 SYMLINK libspdk_ut_mock.so 00:01:54.231 SYMLINK libspdk_log.so 00:01:54.231 CC lib/ioat/ioat.o 00:01:54.231 CXX lib/trace_parser/trace.o 00:01:54.231 CC lib/util/base64.o 00:01:54.231 CC lib/util/bit_array.o 00:01:54.231 CC lib/util/cpuset.o 00:01:54.231 CC lib/dma/dma.o 00:01:54.231 CC lib/util/crc16.o 00:01:54.231 CC lib/util/crc32.o 00:01:54.231 CC lib/util/crc32c.o 00:01:54.231 CC lib/util/crc32_ieee.o 00:01:54.231 CC lib/util/crc64.o 00:01:54.231 CC lib/util/dif.o 00:01:54.231 CC lib/util/fd.o 00:01:54.231 CC lib/util/file.o 00:01:54.231 CC lib/util/hexlify.o 00:01:54.231 CC lib/util/iov.o 00:01:54.231 CC lib/util/math.o 00:01:54.231 CC lib/util/pipe.o 00:01:54.231 CC lib/util/strerror_tls.o 00:01:54.231 CC lib/util/string.o 00:01:54.231 CC lib/util/uuid.o 00:01:54.231 CC lib/util/xor.o 00:01:54.231 CC lib/util/fd_group.o 00:01:54.231 CC lib/util/zipf.o 00:01:54.231 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.231 CC lib/vfio_user/host/vfio_user.o 00:01:54.231 LIB libspdk_dma.a 00:01:54.231 LIB libspdk_ioat.a 00:01:54.231 SO libspdk_dma.so.4.0 00:01:54.231 SO libspdk_ioat.so.7.0 00:01:54.231 SYMLINK libspdk_dma.so 00:01:54.231 SYMLINK libspdk_ioat.so 00:01:54.231 LIB libspdk_vfio_user.a 00:01:54.231 SO libspdk_vfio_user.so.5.0 00:01:54.231 LIB libspdk_util.a 00:01:54.231 SYMLINK libspdk_vfio_user.so 00:01:54.231 SO libspdk_util.so.9.0 00:01:54.492 SYMLINK libspdk_util.so 00:01:54.492 LIB libspdk_trace_parser.a 00:01:54.492 SO libspdk_trace_parser.so.5.0 00:01:54.752 SYMLINK libspdk_trace_parser.so 00:01:54.752 CC lib/json/json_parse.o 00:01:54.752 CC lib/idxd/idxd.o 00:01:54.752 CC lib/conf/conf.o 00:01:54.752 CC lib/json/json_util.o 00:01:54.752 CC lib/idxd/idxd_user.o 00:01:54.752 CC lib/json/json_write.o 00:01:54.752 CC lib/rdma/common.o 00:01:54.752 CC lib/vmd/vmd.o 00:01:54.752 CC lib/env_dpdk/env.o 00:01:54.752 CC lib/rdma/rdma_verbs.o 00:01:54.752 CC lib/vmd/led.o 00:01:54.752 CC lib/env_dpdk/memory.o 00:01:54.752 CC lib/env_dpdk/pci.o 00:01:54.752 CC lib/env_dpdk/init.o 00:01:54.752 CC lib/env_dpdk/threads.o 00:01:54.752 CC lib/env_dpdk/pci_ioat.o 00:01:54.752 CC lib/env_dpdk/pci_virtio.o 00:01:54.752 CC lib/env_dpdk/pci_vmd.o 00:01:54.752 CC lib/env_dpdk/pci_idxd.o 00:01:54.752 CC lib/env_dpdk/pci_event.o 00:01:54.752 CC lib/env_dpdk/sigbus_handler.o 00:01:54.752 CC lib/env_dpdk/pci_dpdk.o 00:01:54.752 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:54.752 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:55.090 LIB libspdk_conf.a 00:01:55.090 LIB libspdk_json.a 00:01:55.090 SO libspdk_conf.so.6.0 00:01:55.090 SO libspdk_json.so.6.0 00:01:55.090 LIB libspdk_rdma.a 00:01:55.090 SYMLINK libspdk_conf.so 00:01:55.090 SO libspdk_rdma.so.6.0 00:01:55.090 SYMLINK libspdk_json.so 00:01:55.090 SYMLINK libspdk_rdma.so 00:01:55.348 LIB 
libspdk_idxd.a 00:01:55.348 SO libspdk_idxd.so.12.0 00:01:55.348 LIB libspdk_vmd.a 00:01:55.348 SYMLINK libspdk_idxd.so 00:01:55.348 SO libspdk_vmd.so.6.0 00:01:55.608 SYMLINK libspdk_vmd.so 00:01:55.608 CC lib/jsonrpc/jsonrpc_client.o 00:01:55.608 CC lib/jsonrpc/jsonrpc_server.o 00:01:55.608 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:55.608 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:55.868 LIB libspdk_jsonrpc.a 00:01:55.868 SO libspdk_jsonrpc.so.6.0 00:01:55.868 SYMLINK libspdk_jsonrpc.so 00:01:55.868 LIB libspdk_env_dpdk.a 00:01:56.127 SO libspdk_env_dpdk.so.14.0 00:01:56.127 SYMLINK libspdk_env_dpdk.so 00:01:56.127 CC lib/rpc/rpc.o 00:01:56.386 LIB libspdk_rpc.a 00:01:56.386 SO libspdk_rpc.so.6.0 00:01:56.386 SYMLINK libspdk_rpc.so 00:01:56.955 CC lib/keyring/keyring.o 00:01:56.955 CC lib/keyring/keyring_rpc.o 00:01:56.955 CC lib/notify/notify.o 00:01:56.955 CC lib/notify/notify_rpc.o 00:01:56.955 CC lib/trace/trace.o 00:01:56.955 CC lib/trace/trace_flags.o 00:01:56.955 CC lib/trace/trace_rpc.o 00:01:56.955 LIB libspdk_keyring.a 00:01:56.955 LIB libspdk_notify.a 00:01:56.955 SO libspdk_keyring.so.1.0 00:01:56.955 SO libspdk_notify.so.6.0 00:01:57.214 LIB libspdk_trace.a 00:01:57.214 SYMLINK libspdk_keyring.so 00:01:57.214 SYMLINK libspdk_notify.so 00:01:57.214 SO libspdk_trace.so.10.0 00:01:57.214 SYMLINK libspdk_trace.so 00:01:57.474 CC lib/thread/thread.o 00:01:57.474 CC lib/thread/iobuf.o 00:01:57.474 CC lib/sock/sock.o 00:01:57.474 CC lib/sock/sock_rpc.o 00:01:58.045 LIB libspdk_sock.a 00:01:58.045 SO libspdk_sock.so.9.0 00:01:58.045 SYMLINK libspdk_sock.so 00:01:58.306 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:58.306 CC lib/nvme/nvme_ctrlr.o 00:01:58.306 CC lib/nvme/nvme_ns_cmd.o 00:01:58.306 CC lib/nvme/nvme_fabric.o 00:01:58.306 CC lib/nvme/nvme_ns.o 00:01:58.306 CC lib/nvme/nvme_qpair.o 00:01:58.306 CC lib/nvme/nvme_pcie_common.o 00:01:58.306 CC lib/nvme/nvme_pcie.o 00:01:58.306 CC lib/nvme/nvme_quirks.o 00:01:58.306 CC lib/nvme/nvme.o 00:01:58.306 CC lib/nvme/nvme_transport.o 00:01:58.306 CC lib/nvme/nvme_discovery.o 00:01:58.306 CC lib/nvme/nvme_tcp.o 00:01:58.306 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:58.306 CC lib/nvme/nvme_opal.o 00:01:58.306 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:58.306 CC lib/nvme/nvme_io_msg.o 00:01:58.306 CC lib/nvme/nvme_poll_group.o 00:01:58.306 CC lib/nvme/nvme_stubs.o 00:01:58.306 CC lib/nvme/nvme_zns.o 00:01:58.306 CC lib/nvme/nvme_cuse.o 00:01:58.306 CC lib/nvme/nvme_auth.o 00:01:58.306 CC lib/nvme/nvme_vfio_user.o 00:01:58.306 CC lib/nvme/nvme_rdma.o 00:01:58.878 LIB libspdk_thread.a 00:01:58.878 SO libspdk_thread.so.10.0 00:01:58.878 SYMLINK libspdk_thread.so 00:01:59.136 CC lib/vfu_tgt/tgt_endpoint.o 00:01:59.136 CC lib/vfu_tgt/tgt_rpc.o 00:01:59.136 CC lib/blob/blobstore.o 00:01:59.136 CC lib/blob/zeroes.o 00:01:59.136 CC lib/blob/request.o 00:01:59.136 CC lib/accel/accel.o 00:01:59.136 CC lib/blob/blob_bs_dev.o 00:01:59.136 CC lib/accel/accel_rpc.o 00:01:59.136 CC lib/accel/accel_sw.o 00:01:59.136 CC lib/init/json_config.o 00:01:59.136 CC lib/init/subsystem.o 00:01:59.136 CC lib/init/subsystem_rpc.o 00:01:59.136 CC lib/init/rpc.o 00:01:59.136 CC lib/virtio/virtio.o 00:01:59.136 CC lib/virtio/virtio_vhost_user.o 00:01:59.136 CC lib/virtio/virtio_vfio_user.o 00:01:59.136 CC lib/virtio/virtio_pci.o 00:01:59.396 LIB libspdk_init.a 00:01:59.396 SO libspdk_init.so.5.0 00:01:59.658 LIB libspdk_vfu_tgt.a 00:01:59.658 LIB libspdk_virtio.a 00:01:59.658 SYMLINK libspdk_init.so 00:01:59.658 SO libspdk_virtio.so.7.0 00:01:59.658 SO libspdk_vfu_tgt.so.3.0 
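The LIB / SO / SYMLINK triplets running through this part of the build (the vfu_tgt SYMLINK follows just below) read as: archive the objects into a static libspdk_X.a, link a versioned shared object libspdk_X.so.N.M, then point an unversioned libspdk_X.so symlink at it so -lspdk_X resolves at link time. That reading of the quiet-build labels is an inference from the output, not documented in this log; a generic sketch of the convention, with an invented libspdk_example name and illustrative version numbers:

    # Compile position-independent code, then apply the three steps.
    cc -c -fPIC -o example.o example.c
    ar rcs libspdk_example.a example.o                  # LIB: static archive
    cc -shared -Wl,-soname,libspdk_example.so.2 \
       -o libspdk_example.so.2.0 example.o              # SO: versioned shared object
    ln -sf libspdk_example.so.2.0 libspdk_example.so    # SYMLINK: unversioned dev name
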
00:01:59.658 SYMLINK libspdk_vfu_tgt.so 00:01:59.658 SYMLINK libspdk_virtio.so 00:01:59.919 CC lib/event/app.o 00:01:59.919 CC lib/event/reactor.o 00:01:59.919 CC lib/event/log_rpc.o 00:01:59.919 CC lib/event/app_rpc.o 00:01:59.919 CC lib/event/scheduler_static.o 00:02:00.181 LIB libspdk_accel.a 00:02:00.181 SO libspdk_accel.so.15.0 00:02:00.181 LIB libspdk_nvme.a 00:02:00.181 SYMLINK libspdk_accel.so 00:02:00.181 SO libspdk_nvme.so.13.0 00:02:00.181 LIB libspdk_event.a 00:02:00.441 SO libspdk_event.so.13.0 00:02:00.441 SYMLINK libspdk_event.so 00:02:00.441 SYMLINK libspdk_nvme.so 00:02:00.441 CC lib/bdev/bdev.o 00:02:00.441 CC lib/bdev/bdev_rpc.o 00:02:00.441 CC lib/bdev/part.o 00:02:00.442 CC lib/bdev/bdev_zone.o 00:02:00.701 CC lib/bdev/scsi_nvme.o 00:02:01.640 LIB libspdk_blob.a 00:02:01.640 SO libspdk_blob.so.11.0 00:02:01.640 SYMLINK libspdk_blob.so 00:02:02.212 CC lib/lvol/lvol.o 00:02:02.212 CC lib/blobfs/blobfs.o 00:02:02.212 CC lib/blobfs/tree.o 00:02:02.783 LIB libspdk_bdev.a 00:02:02.783 LIB libspdk_blobfs.a 00:02:02.783 SO libspdk_bdev.so.15.0 00:02:02.783 LIB libspdk_lvol.a 00:02:02.783 SO libspdk_blobfs.so.10.0 00:02:03.045 SO libspdk_lvol.so.10.0 00:02:03.045 SYMLINK libspdk_blobfs.so 00:02:03.045 SYMLINK libspdk_bdev.so 00:02:03.045 SYMLINK libspdk_lvol.so 00:02:03.306 CC lib/ublk/ublk.o 00:02:03.306 CC lib/ublk/ublk_rpc.o 00:02:03.306 CC lib/nbd/nbd_rpc.o 00:02:03.306 CC lib/nbd/nbd.o 00:02:03.306 CC lib/nvmf/ctrlr.o 00:02:03.306 CC lib/nvmf/ctrlr_discovery.o 00:02:03.306 CC lib/nvmf/ctrlr_bdev.o 00:02:03.306 CC lib/nvmf/nvmf.o 00:02:03.306 CC lib/nvmf/subsystem.o 00:02:03.306 CC lib/nvmf/nvmf_rpc.o 00:02:03.306 CC lib/nvmf/transport.o 00:02:03.306 CC lib/nvmf/tcp.o 00:02:03.306 CC lib/nvmf/vfio_user.o 00:02:03.306 CC lib/nvmf/rdma.o 00:02:03.306 CC lib/scsi/dev.o 00:02:03.306 CC lib/scsi/lun.o 00:02:03.306 CC lib/ftl/ftl_core.o 00:02:03.306 CC lib/scsi/port.o 00:02:03.306 CC lib/ftl/ftl_init.o 00:02:03.306 CC lib/scsi/scsi.o 00:02:03.306 CC lib/ftl/ftl_layout.o 00:02:03.306 CC lib/scsi/scsi_bdev.o 00:02:03.306 CC lib/scsi/scsi_pr.o 00:02:03.306 CC lib/ftl/ftl_debug.o 00:02:03.306 CC lib/ftl/ftl_io.o 00:02:03.306 CC lib/scsi/scsi_rpc.o 00:02:03.306 CC lib/scsi/task.o 00:02:03.306 CC lib/ftl/ftl_sb.o 00:02:03.306 CC lib/ftl/ftl_l2p.o 00:02:03.306 CC lib/ftl/ftl_l2p_flat.o 00:02:03.306 CC lib/ftl/ftl_nv_cache.o 00:02:03.306 CC lib/ftl/ftl_band.o 00:02:03.306 CC lib/ftl/ftl_band_ops.o 00:02:03.306 CC lib/ftl/ftl_writer.o 00:02:03.306 CC lib/ftl/ftl_rq.o 00:02:03.306 CC lib/ftl/ftl_reloc.o 00:02:03.306 CC lib/ftl/ftl_l2p_cache.o 00:02:03.306 CC lib/ftl/ftl_p2l.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:03.306 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:03.306 CC lib/ftl/utils/ftl_md.o 00:02:03.306 CC lib/ftl/utils/ftl_conf.o 00:02:03.306 CC lib/ftl/utils/ftl_bitmap.o 00:02:03.306 CC lib/ftl/utils/ftl_mempool.o 00:02:03.306 CC lib/ftl/utils/ftl_property.o 00:02:03.306 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:03.306 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:03.306 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:03.306 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:03.306 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:03.306 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:03.306 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:03.306 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:03.306 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:03.306 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:03.306 CC lib/ftl/base/ftl_base_bdev.o 00:02:03.306 CC lib/ftl/ftl_trace.o 00:02:03.306 CC lib/ftl/base/ftl_base_dev.o 00:02:03.882 LIB libspdk_nbd.a 00:02:03.882 SO libspdk_nbd.so.7.0 00:02:03.882 LIB libspdk_scsi.a 00:02:03.882 SYMLINK libspdk_nbd.so 00:02:03.882 SO libspdk_scsi.so.9.0 00:02:03.882 LIB libspdk_ublk.a 00:02:04.143 SO libspdk_ublk.so.3.0 00:02:04.143 SYMLINK libspdk_scsi.so 00:02:04.143 SYMLINK libspdk_ublk.so 00:02:04.143 LIB libspdk_ftl.a 00:02:04.403 CC lib/vhost/vhost.o 00:02:04.403 CC lib/vhost/vhost_rpc.o 00:02:04.403 CC lib/vhost/vhost_scsi.o 00:02:04.403 CC lib/vhost/vhost_blk.o 00:02:04.403 CC lib/vhost/rte_vhost_user.o 00:02:04.403 CC lib/iscsi/conn.o 00:02:04.403 CC lib/iscsi/init_grp.o 00:02:04.403 CC lib/iscsi/iscsi.o 00:02:04.403 CC lib/iscsi/md5.o 00:02:04.403 SO libspdk_ftl.so.9.0 00:02:04.403 CC lib/iscsi/param.o 00:02:04.403 CC lib/iscsi/portal_grp.o 00:02:04.403 CC lib/iscsi/tgt_node.o 00:02:04.403 CC lib/iscsi/iscsi_rpc.o 00:02:04.404 CC lib/iscsi/iscsi_subsystem.o 00:02:04.404 CC lib/iscsi/task.o 00:02:04.665 SYMLINK libspdk_ftl.so 00:02:05.238 LIB libspdk_nvmf.a 00:02:05.238 SO libspdk_nvmf.so.18.0 00:02:05.238 LIB libspdk_vhost.a 00:02:05.238 SO libspdk_vhost.so.8.0 00:02:05.499 SYMLINK libspdk_nvmf.so 00:02:05.499 SYMLINK libspdk_vhost.so 00:02:05.499 LIB libspdk_iscsi.a 00:02:05.499 SO libspdk_iscsi.so.8.0 00:02:05.759 SYMLINK libspdk_iscsi.so 00:02:06.495 CC module/vfu_device/vfu_virtio.o 00:02:06.495 CC module/vfu_device/vfu_virtio_blk.o 00:02:06.495 CC module/vfu_device/vfu_virtio_scsi.o 00:02:06.495 CC module/vfu_device/vfu_virtio_rpc.o 00:02:06.495 CC module/env_dpdk/env_dpdk_rpc.o 00:02:06.495 CC module/accel/error/accel_error.o 00:02:06.495 CC module/accel/dsa/accel_dsa.o 00:02:06.495 CC module/accel/error/accel_error_rpc.o 00:02:06.495 CC module/accel/dsa/accel_dsa_rpc.o 00:02:06.495 CC module/keyring/file/keyring.o 00:02:06.495 CC module/accel/ioat/accel_ioat.o 00:02:06.495 CC module/accel/ioat/accel_ioat_rpc.o 00:02:06.495 CC module/keyring/file/keyring_rpc.o 00:02:06.495 LIB libspdk_env_dpdk_rpc.a 00:02:06.495 CC module/scheduler/gscheduler/gscheduler.o 00:02:06.495 CC module/accel/iaa/accel_iaa.o 00:02:06.495 CC module/accel/iaa/accel_iaa_rpc.o 00:02:06.495 CC module/sock/posix/posix.o 00:02:06.495 CC module/blob/bdev/blob_bdev.o 00:02:06.495 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:06.495 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:06.495 SO libspdk_env_dpdk_rpc.so.6.0 00:02:06.495 SYMLINK libspdk_env_dpdk_rpc.so 00:02:06.779 LIB libspdk_keyring_file.a 00:02:06.779 LIB libspdk_scheduler_gscheduler.a 00:02:06.779 LIB libspdk_accel_error.a 00:02:06.779 LIB libspdk_accel_ioat.a 00:02:06.779 LIB libspdk_scheduler_dpdk_governor.a 00:02:06.779 SO libspdk_keyring_file.so.1.0 00:02:06.779 SO libspdk_scheduler_gscheduler.so.4.0 00:02:06.779 LIB libspdk_scheduler_dynamic.a 00:02:06.779 LIB libspdk_accel_iaa.a 00:02:06.779 SO libspdk_accel_error.so.2.0 00:02:06.779 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:06.779 LIB libspdk_accel_dsa.a 00:02:06.779 SO libspdk_accel_ioat.so.6.0 00:02:06.779 SO libspdk_scheduler_dynamic.so.4.0 00:02:06.779 SYMLINK 
libspdk_keyring_file.so 00:02:06.779 SO libspdk_accel_iaa.so.3.0 00:02:06.779 SYMLINK libspdk_scheduler_gscheduler.so 00:02:06.779 LIB libspdk_blob_bdev.a 00:02:06.779 SO libspdk_accel_dsa.so.5.0 00:02:06.779 SYMLINK libspdk_accel_error.so 00:02:06.779 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:06.779 SO libspdk_blob_bdev.so.11.0 00:02:06.779 SYMLINK libspdk_accel_ioat.so 00:02:06.779 SYMLINK libspdk_scheduler_dynamic.so 00:02:06.779 SYMLINK libspdk_accel_iaa.so 00:02:06.779 SYMLINK libspdk_accel_dsa.so 00:02:06.779 SYMLINK libspdk_blob_bdev.so 00:02:06.779 LIB libspdk_vfu_device.a 00:02:07.046 SO libspdk_vfu_device.so.3.0 00:02:07.046 SYMLINK libspdk_vfu_device.so 00:02:07.307 LIB libspdk_sock_posix.a 00:02:07.307 SO libspdk_sock_posix.so.6.0 00:02:07.307 SYMLINK libspdk_sock_posix.so 00:02:07.307 CC module/bdev/lvol/vbdev_lvol.o 00:02:07.307 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:07.307 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:07.307 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:07.307 CC module/bdev/raid/bdev_raid.o 00:02:07.307 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:07.307 CC module/bdev/raid/bdev_raid_rpc.o 00:02:07.307 CC module/bdev/iscsi/bdev_iscsi.o 00:02:07.567 CC module/bdev/raid/bdev_raid_sb.o 00:02:07.567 CC module/bdev/raid/raid0.o 00:02:07.567 CC module/bdev/nvme/bdev_nvme.o 00:02:07.567 CC module/bdev/raid/raid1.o 00:02:07.567 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:07.567 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:07.567 CC module/bdev/ftl/bdev_ftl.o 00:02:07.567 CC module/bdev/split/vbdev_split.o 00:02:07.567 CC module/bdev/raid/concat.o 00:02:07.567 CC module/bdev/gpt/gpt.o 00:02:07.567 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:07.567 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:07.567 CC module/bdev/split/vbdev_split_rpc.o 00:02:07.567 CC module/bdev/nvme/nvme_rpc.o 00:02:07.567 CC module/bdev/gpt/vbdev_gpt.o 00:02:07.567 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:07.567 CC module/bdev/nvme/bdev_mdns_client.o 00:02:07.567 CC module/bdev/malloc/bdev_malloc.o 00:02:07.567 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:07.567 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:07.567 CC module/bdev/nvme/vbdev_opal.o 00:02:07.567 CC module/blobfs/bdev/blobfs_bdev.o 00:02:07.567 CC module/bdev/delay/vbdev_delay.o 00:02:07.567 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:07.567 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:07.568 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:07.568 CC module/bdev/null/bdev_null.o 00:02:07.568 CC module/bdev/error/vbdev_error.o 00:02:07.568 CC module/bdev/passthru/vbdev_passthru.o 00:02:07.568 CC module/bdev/error/vbdev_error_rpc.o 00:02:07.568 CC module/bdev/null/bdev_null_rpc.o 00:02:07.568 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:07.568 CC module/bdev/aio/bdev_aio.o 00:02:07.568 CC module/bdev/aio/bdev_aio_rpc.o 00:02:07.568 LIB libspdk_blobfs_bdev.a 00:02:07.828 SO libspdk_blobfs_bdev.so.6.0 00:02:07.828 LIB libspdk_bdev_split.a 00:02:07.828 LIB libspdk_bdev_null.a 00:02:07.828 SO libspdk_bdev_split.so.6.0 00:02:07.828 LIB libspdk_bdev_gpt.a 00:02:07.828 LIB libspdk_bdev_passthru.a 00:02:07.828 LIB libspdk_bdev_ftl.a 00:02:07.828 SYMLINK libspdk_blobfs_bdev.so 00:02:07.828 LIB libspdk_bdev_error.a 00:02:07.828 LIB libspdk_bdev_zone_block.a 00:02:07.828 SO libspdk_bdev_null.so.6.0 00:02:07.828 SO libspdk_bdev_ftl.so.6.0 00:02:07.828 SO libspdk_bdev_gpt.so.6.0 00:02:07.828 SO libspdk_bdev_passthru.so.6.0 00:02:07.828 SYMLINK libspdk_bdev_split.so 00:02:07.828 SO libspdk_bdev_error.so.6.0 
00:02:07.828 LIB libspdk_bdev_malloc.a 00:02:07.828 LIB libspdk_bdev_aio.a 00:02:07.828 SO libspdk_bdev_zone_block.so.6.0 00:02:07.828 LIB libspdk_bdev_delay.a 00:02:07.828 LIB libspdk_bdev_iscsi.a 00:02:07.828 SYMLINK libspdk_bdev_null.so 00:02:07.828 SO libspdk_bdev_aio.so.6.0 00:02:07.828 SYMLINK libspdk_bdev_gpt.so 00:02:07.828 SYMLINK libspdk_bdev_passthru.so 00:02:07.828 SYMLINK libspdk_bdev_ftl.so 00:02:07.828 LIB libspdk_bdev_lvol.a 00:02:07.828 SO libspdk_bdev_malloc.so.6.0 00:02:07.828 SO libspdk_bdev_delay.so.6.0 00:02:07.828 SO libspdk_bdev_iscsi.so.6.0 00:02:07.828 SYMLINK libspdk_bdev_error.so 00:02:07.828 SYMLINK libspdk_bdev_zone_block.so 00:02:07.828 SYMLINK libspdk_bdev_aio.so 00:02:07.828 SO libspdk_bdev_lvol.so.6.0 00:02:07.828 LIB libspdk_bdev_virtio.a 00:02:07.828 SYMLINK libspdk_bdev_malloc.so 00:02:08.088 SYMLINK libspdk_bdev_delay.so 00:02:08.088 SYMLINK libspdk_bdev_iscsi.so 00:02:08.088 SO libspdk_bdev_virtio.so.6.0 00:02:08.088 SYMLINK libspdk_bdev_lvol.so 00:02:08.088 SYMLINK libspdk_bdev_virtio.so 00:02:08.350 LIB libspdk_bdev_raid.a 00:02:08.350 SO libspdk_bdev_raid.so.6.0 00:02:08.350 SYMLINK libspdk_bdev_raid.so 00:02:09.295 LIB libspdk_bdev_nvme.a 00:02:09.295 SO libspdk_bdev_nvme.so.7.0 00:02:09.556 SYMLINK libspdk_bdev_nvme.so 00:02:10.126 CC module/event/subsystems/vmd/vmd.o 00:02:10.126 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:10.126 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:10.126 CC module/event/subsystems/iobuf/iobuf.o 00:02:10.126 CC module/event/subsystems/keyring/keyring.o 00:02:10.126 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:10.126 CC module/event/subsystems/scheduler/scheduler.o 00:02:10.126 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:10.126 CC module/event/subsystems/sock/sock.o 00:02:10.387 LIB libspdk_event_scheduler.a 00:02:10.387 LIB libspdk_event_vmd.a 00:02:10.387 LIB libspdk_event_keyring.a 00:02:10.387 LIB libspdk_event_vfu_tgt.a 00:02:10.387 LIB libspdk_event_vhost_blk.a 00:02:10.387 LIB libspdk_event_sock.a 00:02:10.387 LIB libspdk_event_iobuf.a 00:02:10.387 SO libspdk_event_keyring.so.1.0 00:02:10.387 SO libspdk_event_scheduler.so.4.0 00:02:10.387 SO libspdk_event_vmd.so.6.0 00:02:10.387 SO libspdk_event_vfu_tgt.so.3.0 00:02:10.387 SO libspdk_event_sock.so.5.0 00:02:10.387 SO libspdk_event_vhost_blk.so.3.0 00:02:10.387 SO libspdk_event_iobuf.so.3.0 00:02:10.387 SYMLINK libspdk_event_scheduler.so 00:02:10.387 SYMLINK libspdk_event_keyring.so 00:02:10.387 SYMLINK libspdk_event_vhost_blk.so 00:02:10.387 SYMLINK libspdk_event_vmd.so 00:02:10.387 SYMLINK libspdk_event_vfu_tgt.so 00:02:10.387 SYMLINK libspdk_event_sock.so 00:02:10.387 SYMLINK libspdk_event_iobuf.so 00:02:10.958 CC module/event/subsystems/accel/accel.o 00:02:10.958 LIB libspdk_event_accel.a 00:02:10.958 SO libspdk_event_accel.so.6.0 00:02:10.958 SYMLINK libspdk_event_accel.so 00:02:11.529 CC module/event/subsystems/bdev/bdev.o 00:02:11.529 LIB libspdk_event_bdev.a 00:02:11.529 SO libspdk_event_bdev.so.6.0 00:02:11.789 SYMLINK libspdk_event_bdev.so 00:02:12.050 CC module/event/subsystems/ublk/ublk.o 00:02:12.050 CC module/event/subsystems/scsi/scsi.o 00:02:12.050 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:12.050 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:12.050 CC module/event/subsystems/nbd/nbd.o 00:02:12.310 LIB libspdk_event_ublk.a 00:02:12.310 LIB libspdk_event_scsi.a 00:02:12.310 LIB libspdk_event_nbd.a 00:02:12.310 SO libspdk_event_ublk.so.3.0 00:02:12.310 SO libspdk_event_scsi.so.6.0 00:02:12.310 SO 
libspdk_event_nbd.so.6.0 00:02:12.310 LIB libspdk_event_nvmf.a 00:02:12.310 SYMLINK libspdk_event_ublk.so 00:02:12.310 SYMLINK libspdk_event_scsi.so 00:02:12.310 SO libspdk_event_nvmf.so.6.0 00:02:12.310 SYMLINK libspdk_event_nbd.so 00:02:12.310 SYMLINK libspdk_event_nvmf.so 00:02:12.571 CC module/event/subsystems/iscsi/iscsi.o 00:02:12.571 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:12.831 LIB libspdk_event_iscsi.a 00:02:12.831 SO libspdk_event_iscsi.so.6.0 00:02:12.831 LIB libspdk_event_vhost_scsi.a 00:02:12.831 SO libspdk_event_vhost_scsi.so.3.0 00:02:12.831 SYMLINK libspdk_event_iscsi.so 00:02:13.092 SYMLINK libspdk_event_vhost_scsi.so 00:02:13.092 SO libspdk.so.6.0 00:02:13.092 SYMLINK libspdk.so 00:02:13.670 CC app/trace_record/trace_record.o 00:02:13.670 CXX app/trace/trace.o 00:02:13.670 CC app/spdk_nvme_discover/discovery_aer.o 00:02:13.670 CC app/spdk_top/spdk_top.o 00:02:13.670 CC test/rpc_client/rpc_client_test.o 00:02:13.670 CC app/spdk_nvme_perf/perf.o 00:02:13.670 CC app/spdk_lspci/spdk_lspci.o 00:02:13.670 CC app/spdk_nvme_identify/identify.o 00:02:13.670 TEST_HEADER include/spdk/accel.h 00:02:13.670 TEST_HEADER include/spdk/accel_module.h 00:02:13.670 TEST_HEADER include/spdk/assert.h 00:02:13.670 TEST_HEADER include/spdk/barrier.h 00:02:13.670 TEST_HEADER include/spdk/base64.h 00:02:13.670 TEST_HEADER include/spdk/bdev.h 00:02:13.670 TEST_HEADER include/spdk/bdev_module.h 00:02:13.670 TEST_HEADER include/spdk/bit_array.h 00:02:13.670 TEST_HEADER include/spdk/bdev_zone.h 00:02:13.670 TEST_HEADER include/spdk/bit_pool.h 00:02:13.670 TEST_HEADER include/spdk/blob_bdev.h 00:02:13.670 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:13.670 TEST_HEADER include/spdk/blobfs.h 00:02:13.670 TEST_HEADER include/spdk/blob.h 00:02:13.670 TEST_HEADER include/spdk/conf.h 00:02:13.670 TEST_HEADER include/spdk/config.h 00:02:13.670 CC app/nvmf_tgt/nvmf_main.o 00:02:13.670 TEST_HEADER include/spdk/cpuset.h 00:02:13.670 TEST_HEADER include/spdk/crc16.h 00:02:13.670 TEST_HEADER include/spdk/crc64.h 00:02:13.670 TEST_HEADER include/spdk/dif.h 00:02:13.670 TEST_HEADER include/spdk/crc32.h 00:02:13.670 TEST_HEADER include/spdk/dma.h 00:02:13.670 TEST_HEADER include/spdk/endian.h 00:02:13.670 CC app/vhost/vhost.o 00:02:13.670 TEST_HEADER include/spdk/env_dpdk.h 00:02:13.670 TEST_HEADER include/spdk/env.h 00:02:13.670 TEST_HEADER include/spdk/event.h 00:02:13.670 TEST_HEADER include/spdk/fd_group.h 00:02:13.670 TEST_HEADER include/spdk/file.h 00:02:13.670 CC app/spdk_dd/spdk_dd.o 00:02:13.670 TEST_HEADER include/spdk/ftl.h 00:02:13.670 TEST_HEADER include/spdk/fd.h 00:02:13.670 TEST_HEADER include/spdk/gpt_spec.h 00:02:13.670 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:13.670 TEST_HEADER include/spdk/hexlify.h 00:02:13.670 TEST_HEADER include/spdk/histogram_data.h 00:02:13.670 CC app/iscsi_tgt/iscsi_tgt.o 00:02:13.670 TEST_HEADER include/spdk/idxd.h 00:02:13.670 TEST_HEADER include/spdk/idxd_spec.h 00:02:13.670 TEST_HEADER include/spdk/init.h 00:02:13.670 TEST_HEADER include/spdk/ioat.h 00:02:13.670 TEST_HEADER include/spdk/ioat_spec.h 00:02:13.670 TEST_HEADER include/spdk/iscsi_spec.h 00:02:13.670 TEST_HEADER include/spdk/jsonrpc.h 00:02:13.670 TEST_HEADER include/spdk/json.h 00:02:13.670 TEST_HEADER include/spdk/keyring.h 00:02:13.670 TEST_HEADER include/spdk/keyring_module.h 00:02:13.670 TEST_HEADER include/spdk/likely.h 00:02:13.670 TEST_HEADER include/spdk/log.h 00:02:13.670 TEST_HEADER include/spdk/memory.h 00:02:13.670 TEST_HEADER include/spdk/mmio.h 00:02:13.670 TEST_HEADER 
include/spdk/lvol.h 00:02:13.670 TEST_HEADER include/spdk/nbd.h 00:02:13.670 TEST_HEADER include/spdk/notify.h 00:02:13.670 TEST_HEADER include/spdk/nvme.h 00:02:13.670 TEST_HEADER include/spdk/nvme_intel.h 00:02:13.670 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:13.670 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:13.670 CC app/spdk_tgt/spdk_tgt.o 00:02:13.670 TEST_HEADER include/spdk/nvme_spec.h 00:02:13.670 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:13.670 TEST_HEADER include/spdk/nvme_zns.h 00:02:13.670 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:13.670 TEST_HEADER include/spdk/nvmf.h 00:02:13.670 TEST_HEADER include/spdk/nvmf_spec.h 00:02:13.670 TEST_HEADER include/spdk/nvmf_transport.h 00:02:13.670 TEST_HEADER include/spdk/opal.h 00:02:13.670 TEST_HEADER include/spdk/opal_spec.h 00:02:13.670 TEST_HEADER include/spdk/pipe.h 00:02:13.670 TEST_HEADER include/spdk/pci_ids.h 00:02:13.670 TEST_HEADER include/spdk/queue.h 00:02:13.670 TEST_HEADER include/spdk/reduce.h 00:02:13.670 TEST_HEADER include/spdk/rpc.h 00:02:13.670 TEST_HEADER include/spdk/scsi.h 00:02:13.670 TEST_HEADER include/spdk/scheduler.h 00:02:13.670 TEST_HEADER include/spdk/scsi_spec.h 00:02:13.670 TEST_HEADER include/spdk/sock.h 00:02:13.670 TEST_HEADER include/spdk/stdinc.h 00:02:13.670 TEST_HEADER include/spdk/string.h 00:02:13.670 TEST_HEADER include/spdk/trace_parser.h 00:02:13.670 TEST_HEADER include/spdk/trace.h 00:02:13.670 TEST_HEADER include/spdk/tree.h 00:02:13.670 TEST_HEADER include/spdk/thread.h 00:02:13.670 TEST_HEADER include/spdk/ublk.h 00:02:13.670 TEST_HEADER include/spdk/util.h 00:02:13.670 TEST_HEADER include/spdk/version.h 00:02:13.670 TEST_HEADER include/spdk/uuid.h 00:02:13.670 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:13.670 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:13.670 TEST_HEADER include/spdk/vhost.h 00:02:13.670 TEST_HEADER include/spdk/xor.h 00:02:13.670 TEST_HEADER include/spdk/zipf.h 00:02:13.670 TEST_HEADER include/spdk/vmd.h 00:02:13.670 CXX test/cpp_headers/accel.o 00:02:13.670 CXX test/cpp_headers/accel_module.o 00:02:13.670 CXX test/cpp_headers/assert.o 00:02:13.670 CXX test/cpp_headers/barrier.o 00:02:13.670 CXX test/cpp_headers/base64.o 00:02:13.670 CXX test/cpp_headers/bdev_module.o 00:02:13.670 CXX test/cpp_headers/bdev.o 00:02:13.670 CXX test/cpp_headers/bdev_zone.o 00:02:13.670 CXX test/cpp_headers/bit_pool.o 00:02:13.670 CXX test/cpp_headers/bit_array.o 00:02:13.670 CXX test/cpp_headers/blobfs.o 00:02:13.670 CXX test/cpp_headers/blob.o 00:02:13.670 CXX test/cpp_headers/blob_bdev.o 00:02:13.670 CXX test/cpp_headers/conf.o 00:02:13.670 CXX test/cpp_headers/blobfs_bdev.o 00:02:13.670 CXX test/cpp_headers/config.o 00:02:13.670 CXX test/cpp_headers/crc64.o 00:02:13.670 CXX test/cpp_headers/cpuset.o 00:02:13.670 CXX test/cpp_headers/dif.o 00:02:13.670 CXX test/cpp_headers/crc16.o 00:02:13.670 CXX test/cpp_headers/crc32.o 00:02:13.670 CXX test/cpp_headers/dma.o 00:02:13.670 CXX test/cpp_headers/env_dpdk.o 00:02:13.670 CXX test/cpp_headers/env.o 00:02:13.670 CXX test/cpp_headers/endian.o 00:02:13.670 CXX test/cpp_headers/fd_group.o 00:02:13.670 CXX test/cpp_headers/event.o 00:02:13.670 CXX test/cpp_headers/file.o 00:02:13.670 CXX test/cpp_headers/fd.o 00:02:13.670 CXX test/cpp_headers/ftl.o 00:02:13.670 CXX test/cpp_headers/hexlify.o 00:02:13.670 CXX test/cpp_headers/gpt_spec.o 00:02:13.670 CXX test/cpp_headers/idxd.o 00:02:13.670 CXX test/cpp_headers/idxd_spec.o 00:02:13.670 CXX test/cpp_headers/histogram_data.o 00:02:13.670 CXX test/cpp_headers/ioat.o 00:02:13.670 CXX 
test/cpp_headers/init.o 00:02:13.670 CXX test/cpp_headers/ioat_spec.o 00:02:13.670 CXX test/cpp_headers/iscsi_spec.o 00:02:13.670 CXX test/cpp_headers/jsonrpc.o 00:02:13.670 CXX test/cpp_headers/json.o 00:02:13.670 CXX test/cpp_headers/likely.o 00:02:13.670 CXX test/cpp_headers/keyring_module.o 00:02:13.670 CXX test/cpp_headers/keyring.o 00:02:13.670 CXX test/cpp_headers/lvol.o 00:02:13.670 CXX test/cpp_headers/log.o 00:02:13.670 CXX test/cpp_headers/mmio.o 00:02:13.670 CXX test/cpp_headers/memory.o 00:02:13.670 CXX test/cpp_headers/nvme.o 00:02:13.670 CXX test/cpp_headers/nbd.o 00:02:13.670 CXX test/cpp_headers/notify.o 00:02:13.670 CXX test/cpp_headers/nvme_intel.o 00:02:13.670 CXX test/cpp_headers/nvme_ocssd.o 00:02:13.670 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:13.670 CXX test/cpp_headers/nvme_spec.o 00:02:13.670 CXX test/cpp_headers/nvme_zns.o 00:02:13.670 CXX test/cpp_headers/nvmf_cmd.o 00:02:13.670 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:13.670 CXX test/cpp_headers/nvmf.o 00:02:13.938 CXX test/cpp_headers/nvmf_spec.o 00:02:13.938 CXX test/cpp_headers/nvmf_transport.o 00:02:13.938 CXX test/cpp_headers/opal_spec.o 00:02:13.938 CXX test/cpp_headers/opal.o 00:02:13.938 CXX test/cpp_headers/pci_ids.o 00:02:13.938 CXX test/cpp_headers/pipe.o 00:02:13.938 CXX test/cpp_headers/queue.o 00:02:13.938 CXX test/cpp_headers/reduce.o 00:02:13.938 CXX test/cpp_headers/rpc.o 00:02:13.938 CXX test/cpp_headers/scheduler.o 00:02:13.938 CXX test/cpp_headers/scsi.o 00:02:13.938 CC test/app/jsoncat/jsoncat.o 00:02:13.938 CC examples/ioat/verify/verify.o 00:02:13.938 CC test/app/histogram_perf/histogram_perf.o 00:02:13.938 CC test/env/memory/memory_ut.o 00:02:13.938 CC examples/ioat/perf/perf.o 00:02:13.938 CC test/nvme/sgl/sgl.o 00:02:13.938 CC examples/accel/perf/accel_perf.o 00:02:13.938 CC test/app/stub/stub.o 00:02:13.938 CC test/nvme/reset/reset.o 00:02:13.938 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:13.938 CC examples/nvme/hotplug/hotplug.o 00:02:13.938 CC test/accel/dif/dif.o 00:02:13.938 CC examples/nvme/hello_world/hello_world.o 00:02:13.938 CC test/nvme/compliance/nvme_compliance.o 00:02:13.938 CC test/nvme/aer/aer.o 00:02:13.938 CC examples/util/zipf/zipf.o 00:02:13.938 CC examples/vmd/led/led.o 00:02:13.938 CC test/nvme/e2edp/nvme_dp.o 00:02:13.938 CC test/nvme/fdp/fdp.o 00:02:13.938 CC examples/nvme/reconnect/reconnect.o 00:02:13.938 CC test/nvme/startup/startup.o 00:02:13.938 CC test/nvme/fused_ordering/fused_ordering.o 00:02:13.938 CC test/env/vtophys/vtophys.o 00:02:13.938 CC test/thread/poller_perf/poller_perf.o 00:02:13.938 CXX test/cpp_headers/scsi_spec.o 00:02:13.938 CC test/nvme/reserve/reserve.o 00:02:13.938 CC examples/nvme/arbitration/arbitration.o 00:02:13.938 CC test/nvme/boot_partition/boot_partition.o 00:02:13.938 CC test/event/reactor/reactor.o 00:02:13.938 CC app/fio/nvme/fio_plugin.o 00:02:13.938 CC test/nvme/cuse/cuse.o 00:02:13.938 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:13.938 CC test/event/reactor_perf/reactor_perf.o 00:02:13.938 CC test/event/app_repeat/app_repeat.o 00:02:13.938 CC examples/nvme/abort/abort.o 00:02:13.938 CC examples/vmd/lsvmd/lsvmd.o 00:02:13.938 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:13.938 CC test/nvme/simple_copy/simple_copy.o 00:02:13.938 CC test/env/pci/pci_ut.o 00:02:13.938 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:13.938 CC test/bdev/bdevio/bdevio.o 00:02:13.938 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:13.938 CC test/nvme/overhead/overhead.o 00:02:13.938 LINK spdk_lspci 00:02:13.938 CC 
test/nvme/err_injection/err_injection.o 00:02:13.938 CC test/dma/test_dma/test_dma.o 00:02:13.938 CC examples/sock/hello_world/hello_sock.o 00:02:13.938 CC examples/idxd/perf/perf.o 00:02:13.938 CC test/event/event_perf/event_perf.o 00:02:13.938 CC test/nvme/connect_stress/connect_stress.o 00:02:13.938 CC test/blobfs/mkfs/mkfs.o 00:02:13.938 CC examples/blob/hello_world/hello_blob.o 00:02:13.938 CC examples/bdev/bdevperf/bdevperf.o 00:02:13.938 CC examples/bdev/hello_world/hello_bdev.o 00:02:13.938 CC examples/blob/cli/blobcli.o 00:02:13.938 CC app/fio/bdev/fio_plugin.o 00:02:13.938 CC test/app/bdev_svc/bdev_svc.o 00:02:13.938 CC examples/nvmf/nvmf/nvmf.o 00:02:13.938 CC examples/thread/thread/thread_ex.o 00:02:13.938 CC test/event/scheduler/scheduler.o 00:02:14.208 LINK rpc_client_test 00:02:14.208 LINK vhost 00:02:14.208 LINK spdk_nvme_discover 00:02:14.208 LINK nvmf_tgt 00:02:14.208 LINK spdk_trace_record 00:02:14.208 LINK interrupt_tgt 00:02:14.469 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:14.469 CC test/lvol/esnap/esnap.o 00:02:14.469 CC test/env/mem_callbacks/mem_callbacks.o 00:02:14.469 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:14.469 LINK spdk_tgt 00:02:14.469 LINK poller_perf 00:02:14.469 LINK lsvmd 00:02:14.469 LINK iscsi_tgt 00:02:14.469 LINK jsoncat 00:02:14.469 LINK stub 00:02:14.469 LINK vtophys 00:02:14.469 LINK histogram_perf 00:02:14.469 LINK led 00:02:14.469 LINK zipf 00:02:14.469 LINK reactor 00:02:14.469 LINK startup 00:02:14.469 CXX test/cpp_headers/sock.o 00:02:14.469 LINK connect_stress 00:02:14.469 LINK event_perf 00:02:14.469 CXX test/cpp_headers/stdinc.o 00:02:14.469 LINK reactor_perf 00:02:14.469 LINK env_dpdk_post_init 00:02:14.469 LINK boot_partition 00:02:14.469 CXX test/cpp_headers/string.o 00:02:14.469 CXX test/cpp_headers/thread.o 00:02:14.469 LINK app_repeat 00:02:14.730 CXX test/cpp_headers/trace.o 00:02:14.730 CXX test/cpp_headers/tree.o 00:02:14.730 CXX test/cpp_headers/ublk.o 00:02:14.730 CXX test/cpp_headers/trace_parser.o 00:02:14.730 CXX test/cpp_headers/util.o 00:02:14.730 LINK pmr_persistence 00:02:14.730 LINK reserve 00:02:14.730 LINK ioat_perf 00:02:14.730 LINK doorbell_aers 00:02:14.730 CXX test/cpp_headers/uuid.o 00:02:14.730 LINK hello_world 00:02:14.730 LINK err_injection 00:02:14.730 LINK spdk_dd 00:02:14.730 CXX test/cpp_headers/version.o 00:02:14.730 CXX test/cpp_headers/vfio_user_pci.o 00:02:14.730 CXX test/cpp_headers/vfio_user_spec.o 00:02:14.730 CXX test/cpp_headers/vhost.o 00:02:14.730 LINK cmb_copy 00:02:14.730 CXX test/cpp_headers/vmd.o 00:02:14.730 CXX test/cpp_headers/xor.o 00:02:14.730 CXX test/cpp_headers/zipf.o 00:02:14.730 LINK fused_ordering 00:02:14.730 LINK mkfs 00:02:14.730 LINK sgl 00:02:14.730 LINK bdev_svc 00:02:14.730 LINK verify 00:02:14.730 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:14.730 LINK hotplug 00:02:14.730 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:14.730 LINK simple_copy 00:02:14.730 LINK scheduler 00:02:14.730 LINK aer 00:02:14.730 LINK reset 00:02:14.730 LINK hello_blob 00:02:14.730 LINK hello_sock 00:02:14.730 LINK hello_bdev 00:02:14.730 LINK overhead 00:02:14.730 LINK nvme_dp 00:02:14.730 LINK thread 00:02:14.730 LINK nvme_compliance 00:02:14.730 LINK nvmf 00:02:14.730 LINK spdk_trace 00:02:14.730 LINK fdp 00:02:14.730 LINK reconnect 00:02:14.730 LINK dif 00:02:14.730 LINK arbitration 00:02:14.730 LINK idxd_perf 00:02:14.730 LINK abort 00:02:14.730 LINK test_dma 00:02:14.990 LINK bdevio 00:02:14.990 LINK spdk_nvme 00:02:14.990 LINK accel_perf 00:02:14.990 LINK pci_ut 00:02:14.990 
LINK blobcli 00:02:14.990 LINK nvme_manage 00:02:14.990 LINK spdk_bdev 00:02:14.990 LINK spdk_nvme_identify 00:02:14.990 LINK spdk_nvme_perf 00:02:14.990 LINK nvme_fuzz 00:02:15.252 LINK vhost_fuzz 00:02:15.252 LINK spdk_top 00:02:15.252 LINK mem_callbacks 00:02:15.252 LINK memory_ut 00:02:15.252 LINK bdevperf 00:02:15.512 LINK cuse 00:02:16.083 LINK iscsi_fuzz 00:02:17.995 LINK esnap 00:02:18.566 00:02:18.566 real 0m35.044s 00:02:18.566 user 5m11.977s 00:02:18.566 sys 3m20.405s 00:02:18.566 23:05:07 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:18.566 23:05:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.566 ************************************ 00:02:18.566 END TEST make 00:02:18.566 ************************************ 00:02:18.566 23:05:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:18.566 23:05:07 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:18.566 23:05:07 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:18.566 23:05:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.566 23:05:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:18.566 23:05:07 -- pm/common@45 -- $ pid=3582608 00:02:18.566 23:05:07 -- pm/common@52 -- $ sudo kill -TERM 3582608 00:02:18.566 23:05:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.566 23:05:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:18.566 23:05:07 -- pm/common@45 -- $ pid=3582610 00:02:18.566 23:05:07 -- pm/common@52 -- $ sudo kill -TERM 3582610 00:02:18.566 23:05:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.566 23:05:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:18.566 23:05:07 -- pm/common@45 -- $ pid=3582611 00:02:18.566 23:05:07 -- pm/common@52 -- $ sudo kill -TERM 3582611 00:02:18.566 23:05:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.566 23:05:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:18.566 23:05:07 -- pm/common@45 -- $ pid=3582612 00:02:18.566 23:05:07 -- pm/common@52 -- $ sudo kill -TERM 3582612 00:02:18.566 23:05:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:18.566 23:05:07 -- nvmf/common.sh@7 -- # uname -s 00:02:18.566 23:05:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:18.566 23:05:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:18.566 23:05:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:18.566 23:05:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:18.566 23:05:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:18.566 23:05:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:18.566 23:05:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:18.566 23:05:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:18.566 23:05:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:18.827 23:05:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:18.827 23:05:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:18.827 23:05:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:18.827 23:05:07 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:18.827 23:05:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:18.827 23:05:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:18.827 23:05:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:18.827 23:05:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:18.827 23:05:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:18.827 23:05:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:18.827 23:05:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:18.827 23:05:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.827 23:05:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.827 23:05:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.827 23:05:07 -- paths/export.sh@5 -- # export PATH 00:02:18.827 23:05:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.827 23:05:07 -- nvmf/common.sh@47 -- # : 0 00:02:18.827 23:05:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:18.827 23:05:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:18.827 23:05:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:18.827 23:05:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:18.828 23:05:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:18.828 23:05:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:18.828 23:05:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:18.828 23:05:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:18.828 23:05:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:18.828 23:05:07 -- spdk/autotest.sh@32 -- # uname -s 00:02:18.828 23:05:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:18.828 23:05:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:18.828 23:05:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:18.828 23:05:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:18.828 23:05:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:18.828 23:05:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:18.828 23:05:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:18.828 23:05:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:18.828 
23:05:07 -- spdk/autotest.sh@48 -- # udevadm_pid=3658488 00:02:18.828 23:05:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:18.828 23:05:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:18.828 23:05:07 -- pm/common@17 -- # local monitor 00:02:18.828 23:05:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.828 23:05:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3658489 00:02:18.828 23:05:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.828 23:05:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3658492 00:02:18.828 23:05:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.828 23:05:07 -- pm/common@21 -- # date +%s 00:02:18.828 23:05:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3658495 00:02:18.828 23:05:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.828 23:05:07 -- pm/common@21 -- # date +%s 00:02:18.828 23:05:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3658498 00:02:18.828 23:05:07 -- pm/common@26 -- # sleep 1 00:02:18.828 23:05:07 -- pm/common@21 -- # date +%s 00:02:18.828 23:05:07 -- pm/common@21 -- # date +%s 00:02:18.828 23:05:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714165507 00:02:18.828 23:05:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714165507 00:02:18.828 23:05:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714165507 00:02:18.828 23:05:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714165507 00:02:18.828 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714165507_collect-vmstat.pm.log 00:02:18.828 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714165507_collect-cpu-load.pm.log 00:02:18.828 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714165507_collect-bmc-pm.bmc.pm.log 00:02:18.828 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714165507_collect-cpu-temp.pm.log 00:02:19.768 23:05:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:19.768 23:05:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:19.768 23:05:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:19.768 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:02:19.768 23:05:08 -- spdk/autotest.sh@59 -- # create_test_list 00:02:19.768 23:05:08 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:19.768 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:02:19.768 23:05:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:19.768 23:05:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.768 23:05:08 -- 
spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.768 23:05:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:19.768 23:05:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.768 23:05:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:19.768 23:05:08 -- common/autotest_common.sh@1441 -- # uname 00:02:19.768 23:05:08 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:19.768 23:05:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:19.768 23:05:08 -- common/autotest_common.sh@1461 -- # uname 00:02:19.768 23:05:08 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:19.768 23:05:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:19.768 23:05:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:19.768 23:05:08 -- spdk/autotest.sh@72 -- # hash lcov 00:02:19.768 23:05:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:19.768 23:05:08 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:19.768 --rc lcov_branch_coverage=1 00:02:19.768 --rc lcov_function_coverage=1 00:02:19.768 --rc genhtml_branch_coverage=1 00:02:19.768 --rc genhtml_function_coverage=1 00:02:19.768 --rc genhtml_legend=1 00:02:19.768 --rc geninfo_all_blocks=1 00:02:19.768 ' 00:02:19.768 23:05:08 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:19.768 --rc lcov_branch_coverage=1 00:02:19.768 --rc lcov_function_coverage=1 00:02:19.768 --rc genhtml_branch_coverage=1 00:02:19.768 --rc genhtml_function_coverage=1 00:02:19.768 --rc genhtml_legend=1 00:02:19.768 --rc geninfo_all_blocks=1 00:02:19.768 ' 00:02:19.768 23:05:08 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:19.768 --rc lcov_branch_coverage=1 00:02:19.768 --rc lcov_function_coverage=1 00:02:19.768 --rc genhtml_branch_coverage=1 00:02:19.768 --rc genhtml_function_coverage=1 00:02:19.768 --rc genhtml_legend=1 00:02:19.768 --rc geninfo_all_blocks=1 00:02:19.768 --no-external' 00:02:19.768 23:05:08 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:19.768 --rc lcov_branch_coverage=1 00:02:19.768 --rc lcov_function_coverage=1 00:02:19.768 --rc genhtml_branch_coverage=1 00:02:19.768 --rc genhtml_function_coverage=1 00:02:19.768 --rc genhtml_legend=1 00:02:19.768 --rc geninfo_all_blocks=1 00:02:19.768 --no-external' 00:02:19.768 23:05:08 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:20.028 lcov: LCOV version 1.14 00:02:20.028 23:05:09 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:28.169 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:28.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:28.170 geninfo: WARNING: GCOV 
00:02:28.170 geninfo: WARNING: GCOV did not produce any data (no functions found) for the following /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers .gcno files: file, gpt_spec, idxd_spec, histogram_data, idxd, ioat, likely, hexlify, ioat_spec, init, jsonrpc, keyring, keyring_module, json, iscsi_spec, lvol, log, mmio, nvme, notify, nbd, nvme_ocssd_spec, nvme_spec, nvme_ocssd, nvme_intel, memory, nvmf_cmd, nvme_zns, nvmf, nvmf_spec, nvmf_fc_spec, pci_ids, reduce, opal_spec, queue, opal, pipe, nvmf_transport, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, tree, trace_parser, ublk, util, uuid, version, vfio_user_pci, vfio_user_spec, vhost, vmd, xor, zipf
00:02:31.469 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno (no functions found)
00:02:41.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno, ftl_chunk_upgrade.gcno and ftl_p2l_upgrade.gcno (no functions found)
00:02:49.598 23:05:37 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:49.598 23:05:37 -- common/autotest_common.sh@710 -- # xtrace_disable
00:02:49.598 23:05:37 -- common/autotest_common.sh@10 -- # set +x
00:02:49.598 23:05:37 -- spdk/autotest.sh@91 -- # rm -f
00:02:49.598 23:05:37 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:52.142 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:65:00.0 (144d a80a): Already using the nvme driver
00:02:52.142 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:02:52.142 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
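For readers following the trace: the "Already using the <driver> driver" lines above are setup.sh reporting, per PCI function, which kernel driver currently claims the device before any rebinding is attempted. A minimal way to reproduce that check by hand (a sketch reading sysfs directly, not the setup.sh implementation):

#!/usr/bin/env bash
# List every PCI function and the driver that currently claims it, if any.
for dev in /sys/bus/pci/devices/*; do
    bdf=${dev##*/}
    if [[ -e $dev/driver ]]; then
        # The driver symlink points into /sys/bus/pci/drivers/<name>.
        printf '%s: %s\n' "$bdf" "$(basename "$(readlink -f "$dev/driver")")"
    else
        printf '%s: (no driver bound)\n' "$bdf"
    fi
done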
00:02:52.403 23:05:41 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:52.403 23:05:41 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:02:52.403 23:05:41 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:02:52.403 23:05:41 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:02:52.403 23:05:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:02:52.403 23:05:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:02:52.403 23:05:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:02:52.403 23:05:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:52.403 23:05:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:02:52.403 23:05:41 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:52.403 23:05:41 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:52.403 23:05:41 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:52.403 23:05:41 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:52.403 23:05:41 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:52.403 23:05:41 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:52.663 No valid GPT data, bailing
00:02:52.663 23:05:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:52.663 23:05:41 -- scripts/common.sh@391 -- # pt=
00:02:52.663 23:05:41 -- scripts/common.sh@392 -- # return 1
00:02:52.663 23:05:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:52.663 1+0 records in
00:02:52.663 1+0 records out
00:02:52.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043531 s, 241 MB/s
00:02:52.663 23:05:41 -- spdk/autotest.sh@118 -- # sync
00:02:52.663 23:05:41 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:52.663 23:05:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:52.663 23:05:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes
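The pre-cleanup trace above does three things per NVMe namespace: it skips zoned block devices (by reading /sys/block/<dev>/queue/zoned, where "none" means a regular device), it probes for a partition table ("No valid GPT data, bailing" means spdk-gpt.py found none, so blkid is consulted next), and it then zeroes the first MiB with dd so stale metadata cannot leak into the tests. A hedged sketch of the same flow in plain bash (the real logic lives in spdk/autotest.sh and scripts/common.sh; this illustration keeps only the blkid probe):

#!/usr/bin/env bash
# Wipe the first MiB of every non-zoned NVMe namespace that carries no
# partition table. Destructive by design; run only on disposable test rigs.
shopt -s nullglob
for sysdev in /sys/block/nvme*; do
    name=${sysdev##*/}
    # "none" means not zoned; zoned namespaces are left untouched.
    if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
        continue
    fi
    blk=/dev/$name
    # blkid prints a partition-table type (e.g. "gpt") when one exists.
    if [[ -z $(blkid -s PTTYPE -o value "$blk") ]]; then
        dd if=/dev/zero of="$blk" bs=1M count=1
    fi
done
sync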
00:03:00.886 23:05:49 -- spdk/autotest.sh@124 -- # uname -s
00:03:00.886 23:05:49 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:00.886 23:05:49 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:00.886 23:05:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:00.886 23:05:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:00.886 23:05:49 -- common/autotest_common.sh@10 -- # set +x
00:03:00.886 ************************************
00:03:00.886 START TEST setup.sh
00:03:00.886 ************************************
00:03:00.886 23:05:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:00.886 * Looking for test storage...
00:03:00.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:00.886 23:05:49 -- setup/test-setup.sh@10 -- # uname -s
00:03:00.886 23:05:49 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:00.886 23:05:49 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:00.886 23:05:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:00.886 23:05:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:00.886 23:05:49 -- common/autotest_common.sh@10 -- # set +x
00:03:00.886 ************************************
00:03:00.886 START TEST acl
00:03:00.886 ************************************
00:03:01.147 23:05:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:01.147 * Looking for test storage...
00:03:01.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:01.147 23:05:50 -- setup/acl.sh@10 -- # get_zoned_devs
00:03:01.147 23:05:50 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:01.147 23:05:50 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:01.147 23:05:50 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:01.147 23:05:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:01.147 23:05:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:01.147 23:05:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:01.147 23:05:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:01.147 23:05:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:01.147 23:05:50 -- setup/acl.sh@12 -- # devs=()
00:03:01.147 23:05:50 -- setup/acl.sh@12 -- # declare -a devs
00:03:01.147 23:05:50 -- setup/acl.sh@13 -- # drivers=()
00:03:01.147 23:05:50 -- setup/acl.sh@13 -- # declare -A drivers
00:03:01.147 23:05:50 -- setup/acl.sh@51 -- # setup reset
00:03:01.147 23:05:50 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:01.147 23:05:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:04.445 23:05:53 -- setup/acl.sh@52 -- # collect_setup_devs
00:03:04.445 23:05:53 -- setup/acl.sh@16 -- # local dev driver
00:03:04.445 23:05:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:04.445 23:05:53 -- setup/acl.sh@15 -- # setup output status
00:03:04.445 23:05:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:04.445 23:05:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:07.745 Hugepages
00:03:07.745 node hugesize free / total
00:03:07.745 23:05:56 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:07.745 23:05:56 -- setup/acl.sh@19 -- # continue
00:03:07.745 23:05:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[... the @19 test / @19 continue / @18 read cycle repeats for the remaining hugepage rows (2048kB, 1048576kB, 2048kB) of the status output ...]
00:03:07.745 
00:03:07.745 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:07.745 23:05:56 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]]
00:03:07.745 23:05:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:07.745 23:05:56 -- setup/acl.sh@20 -- # continue
00:03:07.745 23:05:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[... the same skip cycle repeats for 0000:00:01.1 through 0000:00:01.7, all bound to ioatdma ...]
00:03:07.746 23:05:56 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]]
00:03:07.746 23:05:56 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:07.746 23:05:56 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]]
00:03:07.746 23:05:56 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:07.746 23:05:56 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:07.746 23:05:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[... the skip cycle repeats for 0000:80:01.0 through 0000:80:01.7, all bound to ioatdma ...]
00:03:07.746 23:05:56 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:07.746 23:05:56 -- setup/acl.sh@54 -- # run_test denied denied
00:03:07.746 23:05:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:07.746 23:05:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:07.746 23:05:56 -- common/autotest_common.sh@10 -- # set +x
00:03:07.746 ************************************
00:03:07.746 START TEST denied
00:03:07.746 ************************************
00:03:07.746 23:05:56 -- common/autotest_common.sh@1111 -- # denied
00:03:07.746 23:05:56 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0'
00:03:07.746 23:05:56 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0'
00:03:07.746 23:05:56 -- setup/acl.sh@38 -- # setup output config
00:03:07.746 23:05:56 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:07.746 23:05:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:11.950 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0
00:03:11.950 23:06:00 -- setup/acl.sh@40 -- # verify 0000:65:00.0
00:03:11.950 23:06:00 -- setup/acl.sh@28 -- # local dev driver
00:03:11.950 23:06:00 -- setup/acl.sh@30 -- # for dev in "$@"
00:03:11.950 23:06:00 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]]
00:03:11.950 23:06:00 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver
00:03:11.950 23:06:00 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:11.950 23:06:00 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:11.950 23:06:00 -- setup/acl.sh@41 -- # setup reset
00:03:11.950 23:06:00 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:11.950 23:06:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:17.236 
00:03:17.236 real 0m8.461s
00:03:17.236 user 0m2.796s
00:03:17.236 sys 0m4.940s
00:03:17.236 23:06:05 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:17.236 23:06:05 -- common/autotest_common.sh@10 -- # set +x
00:03:17.236 ************************************
00:03:17.236 END TEST denied
00:03:17.236 ************************************
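The collect_setup_devs loop traced above turns `setup.sh status` output into the test's device map: each line is read into positional fields, rows whose BDF field does not look like a PCI address are skipped, ioatdma-bound functions are skipped, and the one nvme-bound controller (0000:65:00.0) is collected. A stand-alone sketch of that parsing pattern (the field layout is assumed to follow the 'Type BDF Vendor Device NUMA Driver Device Block devices' header above; this mirrors the trace, not the acl.sh source):

#!/usr/bin/env bash
# Collect the BDFs of nvme-bound devices from setup.sh status output.
devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue    # skip headers and hugepage rows
    [[ $driver == nvme ]] || continue    # skip ioatdma and other drivers
    devs+=("$dev")
    drivers["$dev"]=$driver
done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status)
for dev in "${devs[@]}"; do
    printf '%s -> %s\n' "$dev" "${drivers[$dev]}"
done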
00:03:17.236 23:06:05 -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:17.236 23:06:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:17.236 23:06:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:17.236 23:06:05 -- common/autotest_common.sh@10 -- # set +x
00:03:17.236 ************************************
00:03:17.236 START TEST allowed
00:03:17.236 ************************************
00:03:17.236 23:06:05 -- common/autotest_common.sh@1111 -- # allowed
00:03:17.236 23:06:05 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0
00:03:17.236 23:06:05 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*'
00:03:17.236 23:06:05 -- setup/acl.sh@45 -- # setup output config
00:03:17.236 23:06:05 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.236 23:06:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:22.526 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:22.526 23:06:11 -- setup/acl.sh@47 -- # verify
00:03:22.526 23:06:11 -- setup/acl.sh@28 -- # local dev driver
00:03:22.526 23:06:11 -- setup/acl.sh@48 -- # setup reset
00:03:22.526 23:06:11 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:22.526 23:06:11 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:26.729 
00:03:26.729 real 0m9.614s
00:03:26.729 user 0m2.839s
00:03:26.729 sys 0m5.056s
00:03:26.729 23:06:15 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:26.729 23:06:15 -- common/autotest_common.sh@10 -- # set +x
00:03:26.729 ************************************
00:03:26.729 END TEST allowed
00:03:26.729 ************************************
00:03:26.729 
00:03:26.729 real 0m25.186s
00:03:26.729 user 0m8.110s
00:03:26.729 sys 0m14.659s
00:03:26.729 23:06:15 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:26.729 23:06:15 -- common/autotest_common.sh@10 -- # set +x
00:03:26.729 ************************************
00:03:26.729 END TEST acl
00:03:26.729 ************************************
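The two acl sub-tests drive the same setup script with opposite filters: denied exports PCI_BLOCKED=' 0000:65:00.0' and asserts that `setup.sh config` prints "Skipping denied controller at 0000:65:00.0", while allowed exports PCI_ALLOWED=0000:65:00.0 and asserts that the controller is rebound from nvme to vfio-pci. A sketch of that usage pattern, with the expected strings taken from the log above (environment-variable names as they appear in the trace):

#!/usr/bin/env bash
SETUP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh

# Deny-list: the blocked controller must be skipped during configuration.
PCI_BLOCKED=' 0000:65:00.0' "$SETUP" config \
    | grep 'Skipping denied controller at 0000:65:00.0'
"$SETUP" reset

# Allow-list: only the listed controller may be rebound to vfio-pci.
PCI_ALLOWED=0000:65:00.0 "$SETUP" config \
    | grep -E '0000:65:00.0 .*: nvme -> .*'
"$SETUP" reset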
00:03:26.729 23:06:15 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:26.729 23:06:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:26.729 23:06:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:26.729 23:06:15 -- common/autotest_common.sh@10 -- # set +x
00:03:26.729 ************************************
00:03:26.729 START TEST hugepages
00:03:26.729 ************************************
00:03:26.729 23:06:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:26.729 * Looking for test storage...
00:03:26.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:26.729 23:06:15 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:26.729 23:06:15 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:26.729 23:06:15 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:26.729 23:06:15 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:26.729 23:06:15 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:26.729 23:06:15 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:26.729 23:06:15 -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:26.729 23:06:15 -- setup/common.sh@18 -- # local node=
00:03:26.729 23:06:15 -- setup/common.sh@19 -- # local var val
00:03:26.729 23:06:15 -- setup/common.sh@20 -- # local mem_f mem
00:03:26.729 23:06:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.729 23:06:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.729 23:06:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.729 23:06:15 -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.729 23:06:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.729 23:06:15 -- setup/common.sh@31 -- # IFS=': '
00:03:26.729 23:06:15 -- setup/common.sh@31 -- # read -r var val _
00:03:26.729 23:06:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 104466640 kB' 'MemAvailable: 108007712 kB' 'Buffers: 4124 kB' 'Cached: 13007656 kB' 'SwapCached: 0 kB' 'Active: 10115996 kB' 'Inactive: 3515796 kB' 'Active(anon): 9425600 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623528 kB' 'Mapped: 199216 kB' 'Shmem: 8805588 kB' 'KReclaimable: 318424 kB' 'Slab: 1136324 kB' 'SReclaimable: 318424 kB' 'SUnreclaim: 817900 kB' 'KernelStack: 27072 kB' 'PageTables: 9336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460884 kB' 'Committed_AS: 10829396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234796 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
00:03:26.729 23:06:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:26.729 23:06:15 -- setup/common.sh@32 -- # continue
00:03:26.729 23:06:15 -- setup/common.sh@31 -- # IFS=': '
00:03:26.729 23:06:15 -- setup/common.sh@31 -- # read -r var val _
[... the same @32 field test / @32 continue / @31 IFS / @31 read cycle repeats for every remaining /proc/meminfo field until Hugepagesize matches ...]
00:03:26.730 23:06:15 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:26.730 23:06:15 -- setup/common.sh@33 -- # echo 2048
00:03:26.730 23:06:15 -- setup/common.sh@33 -- # return 0
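The get_meminfo helper traced above is a field scanner: it slurps /proc/meminfo (or a per-node meminfo file when a node is given), strips any 'Node <n>' prefix, and reads 'name: value' pairs until the requested field matches, at which point the value is echoed (2048 for Hugepagesize on this rig). A compact re-implementation of the same idea (a sketch following the trace, not the setup/common.sh source):

#!/usr/bin/env bash
shopt -s extglob
# Print the value (kB for most fields) of one /proc/meminfo counter.
get_meminfo() {
    local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
    # With a node argument, read the per-NUMA-node counters instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }   # per-node files prefix lines with "Node <n>"
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
get_meminfo Hugepagesize   # prints 2048 on this rig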
00:03:26.730 23:06:15 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:26.730 23:06:15 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:26.730 23:06:15 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:26.730 23:06:15 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:26.730 23:06:15 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:26.730 23:06:15 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:26.730 23:06:15 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:26.730 23:06:15 -- setup/hugepages.sh@207 -- # get_nodes
00:03:26.730 23:06:15 -- setup/hugepages.sh@27 -- # local node
00:03:26.730 23:06:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.730 23:06:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:26.730 23:06:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.730 23:06:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:26.730 23:06:15 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.730 23:06:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.730 23:06:15 -- setup/hugepages.sh@208 -- # clear_hp
00:03:26.730 23:06:15 -- setup/hugepages.sh@37 -- # local node hp
00:03:26.730 23:06:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:26.730 23:06:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:26.730 23:06:15 -- setup/hugepages.sh@41 -- # echo 0
00:03:26.730 23:06:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:26.730 23:06:15 -- setup/hugepages.sh@41 -- # echo 0
00:03:26.730 23:06:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:26.730 23:06:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:26.730 23:06:15 -- setup/hugepages.sh@41 -- # echo 0
00:03:26.730 23:06:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:26.730 23:06:15 -- setup/hugepages.sh@41 -- # echo 0
00:03:26.730 23:06:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:26.730 23:06:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:26.730 23:06:15 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:26.730 23:06:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:26.730 23:06:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:26.730 23:06:15 -- common/autotest_common.sh@10 -- # set +x
00:03:26.730 ************************************
00:03:26.730 START TEST default_setup
00:03:26.730 ************************************
00:03:26.731 23:06:15 -- common/autotest_common.sh@1111 -- # default_setup
00:03:26.731 23:06:15 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:26.731 23:06:15 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:26.731 23:06:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:26.731 23:06:15 -- setup/hugepages.sh@51 -- # shift
00:03:26.731 23:06:15 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:26.731 23:06:15 -- setup/hugepages.sh@52 -- # local node_ids
00:03:26.731 23:06:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.731 23:06:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:26.731 23:06:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:26.731 23:06:15 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:26.731 23:06:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.731 23:06:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:26.731 23:06:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.731 23:06:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.731 23:06:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.731 23:06:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:26.731 23:06:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:26.731 23:06:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:26.731 23:06:15 -- setup/hugepages.sh@73 -- # return 0
00:03:26.731 23:06:15 -- setup/hugepages.sh@137 -- # setup output
00:03:26.731 23:06:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.731 23:06:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:30.037 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:30.037 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:30.038 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:30.614 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:30.614 23:06:19 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:30.614 23:06:19 -- setup/hugepages.sh@89 -- # local node
00:03:30.614 23:06:19 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:30.614 23:06:19 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:30.614 23:06:19 -- setup/hugepages.sh@92 -- # local surp
00:03:30.614 23:06:19 -- setup/hugepages.sh@93 -- # local resv
00:03:30.614 23:06:19 -- setup/hugepages.sh@94 -- # local anon
00:03:30.614 23:06:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:30.614 23:06:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:30.614 23:06:19 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:30.614 23:06:19 -- setup/common.sh@18 -- # local node=
00:03:30.614 23:06:19 -- setup/common.sh@19 -- # local var val
00:03:30.614 23:06:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.614 23:06:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.614 23:06:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.614 23:06:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.614 23:06:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.614 23:06:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.614 23:06:19 -- setup/common.sh@31 -- # IFS=': '
00:03:30.614 23:06:19 -- setup/common.sh@31 -- # read -r var val _
00:03:30.614 23:06:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106617056 kB' 'MemAvailable: 110157768 kB' 'Buffers: 4124 kB' 'Cached: 13007776 kB' 'SwapCached: 0 kB' 'Active: 10132056 kB' 'Inactive: 3515796 kB' 'Active(anon): 9441660 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638792 kB' 'Mapped: 199668 kB' 'Shmem: 8805708 kB' 'KReclaimable: 317704 kB' 'Slab: 1134236 kB' 'SReclaimable: 317704 kB' 'SUnreclaim: 816532 kB' 'KernelStack: 27328 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10843144 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234764 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
00:03:30.614 23:06:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.614 23:06:19 -- setup/common.sh@32 -- # continue
00:03:30.614 23:06:19 -- setup/common.sh@31 -- # IFS=': '
00:03:30.614 23:06:19 -- setup/common.sh@31 -- # read -r var val _
[... the same @32 field test / @32 continue / @31 IFS / @31 read cycle repeats for every remaining /proc/meminfo field until AnonHugePages matches ...]
00:03:30.615 23:06:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.615 23:06:19 -- setup/common.sh@33 -- # echo 0
00:03:30.615 23:06:19 -- setup/common.sh@33 -- # return 0
00:03:30.615 23:06:19 -- setup/hugepages.sh@97 -- # anon=0
00:03:30.615 23:06:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:30.615 23:06:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.615 23:06:19 -- setup/common.sh@18 -- # local node=
00:03:30.615 23:06:19 -- setup/common.sh@19 -- # local var val
00:03:30.615 23:06:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.615 23:06:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.615 23:06:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.615 23:06:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.615 23:06:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.615 23:06:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.615 23:06:19 -- setup/common.sh@31 -- # IFS=': '
00:03:30.615 23:06:19 -- setup/common.sh@31 -- # read -r var val _
00:03:30.615 23:06:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106618268 kB' 'MemAvailable: 110158980 kB' 'Buffers: 4124 kB' 'Cached: 13007780 kB' 'SwapCached: 0 kB' 'Active: 10131120 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440724 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638324 kB' 'Mapped: 199512 kB' 'Shmem: 8805712 kB' 'KReclaimable: 317704 kB' 'Slab: 1134192 kB' 'SReclaimable: 317704 kB' 'SUnreclaim: 816488 kB' 'KernelStack: 27200 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10841528 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234684 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k:
3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:30.615 23:06:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.615 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.615 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.615 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.615 23:06:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.615 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.615 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 
-- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 
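The backslash-riddled right-hand sides throughout this trace (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and friends) are not corruption in the log: when bash traces a [[ word == "$pattern" ]] test, it prints the quoted side with every character escaped, so the traced form still denotes a literal string comparison rather than a glob. A standalone illustration of the effect (not part of the SPDK scripts):

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]] || echo 'no match'
    # xtrace prints: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    #                + echo 'no match'
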
00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.616 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.616 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.617 23:06:19 -- setup/common.sh@33 -- # echo 0 00:03:30.617 23:06:19 -- setup/common.sh@33 -- # return 0 00:03:30.617 23:06:19 -- setup/hugepages.sh@99 -- # surp=0 00:03:30.617 23:06:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.617 23:06:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.617 23:06:19 -- setup/common.sh@18 -- # local node= 00:03:30.617 23:06:19 -- setup/common.sh@19 -- # local var val 00:03:30.617 23:06:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.617 23:06:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.617 23:06:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.617 23:06:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.617 23:06:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.617 23:06:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106615828 kB' 'MemAvailable: 110156540 kB' 'Buffers: 4124 kB' 'Cached: 13007792 kB' 'SwapCached: 0 kB' 'Active: 10130988 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440592 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638140 kB' 'Mapped: 199512 kB' 'Shmem: 8805724 kB' 'KReclaimable: 317704 kB' 'Slab: 1134192 kB' 'SReclaimable: 317704 kB' 'SUnreclaim: 816488 kB' 'KernelStack: 27168 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10841680 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234732 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 
23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.617 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.617 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # 
continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.618 23:06:19 -- setup/common.sh@33 -- # echo 0 00:03:30.618 23:06:19 -- setup/common.sh@33 -- # return 0 00:03:30.618 23:06:19 -- setup/hugepages.sh@100 -- # resv=0 00:03:30.618 23:06:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.618 nr_hugepages=1024 00:03:30.618 23:06:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.618 resv_hugepages=0 00:03:30.618 23:06:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.618 surplus_hugepages=0 00:03:30.618 23:06:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.618 anon_hugepages=0 00:03:30.618 23:06:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.618 23:06:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.618 23:06:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.618 23:06:19 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:30.618 23:06:19 -- setup/common.sh@18 -- # local node= 00:03:30.618 23:06:19 -- setup/common.sh@19 -- # local var val 00:03:30.618 23:06:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.618 23:06:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.618 23:06:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.618 23:06:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.618 23:06:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.618 23:06:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106620116 kB' 'MemAvailable: 110160828 kB' 'Buffers: 4124 kB' 'Cached: 13007808 kB' 'SwapCached: 0 kB' 'Active: 10131356 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440960 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638532 kB' 'Mapped: 199512 kB' 'Shmem: 8805740 kB' 'KReclaimable: 317704 kB' 'Slab: 1134072 kB' 'SReclaimable: 317704 kB' 'SUnreclaim: 816368 kB' 'KernelStack: 27280 kB' 'PageTables: 9440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10843328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234764 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.618 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.618 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
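For readability, here is what the helper being traced over and over boils down to. This is a reconstruction of setup/common.sh's get_meminfo assembled from the @17-@33 trace lines above, not the verbatim source; it assumes extglob for the Node-prefix strip:

    shopt -s extglob
    get_meminfo() {
        local get=$1  # key to look up, e.g. HugePages_Total
        local node=$2 # optional NUMA node; empty means system-wide
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # A per-node query reads that node's meminfo from sysfs instead (@23/@24).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # drop the "Node N " prefix of sysfs lines (@29)
        while IFS=': ' read -r var val _; do # split "Key: value kB" (@31)
            [[ $var == "$get" ]] || continue # scan until the requested key (@32)
            echo "$val"                      # e.g. "echo 0", "echo 1024" above (@33)
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

In this run it yields 0 for AnonHugePages, HugePages_Surp, and HugePages_Rsvd, and 1024 for HugePages_Total, which is exactly the sequence of scans in this trace.
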
00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # 
continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.619 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.619 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 
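Putting the hugepages.sh@97-@110 steps together, the verification pass that produced the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary above amounts to roughly the following. The control flow is taken from the trace; the 1024 on the left of each (( ... )) is an expansion whose exact source the trace does not show, so the keys below are inferred, and the sketch leans on the get_meminfo reconstruction given earlier:

    anon=$(get_meminfo AnonHugePages)  # @97  -> 0
    surp=$(get_meminfo HugePages_Surp) # @99  -> 0
    resv=$(get_meminfo HugePages_Rsvd) # @100 -> 0
    echo "nr_hugepages=$nr_hugepages"  # 1024, the count requested for this test
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # Sanity checks as traced at @107/@109/@110; each reduced to (( 1024 == 1024 )).
    # Which meminfo key feeds each left-hand side is a guess:
    (( $(get_meminfo HugePages_Free) == nr_hugepages + surp + resv ))  # @107
    (( $(get_meminfo HugePages_Free) == nr_hugepages ))                # @109
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) # @110
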
00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.620 23:06:19 -- setup/common.sh@33 -- # echo 1024 00:03:30.620 23:06:19 -- setup/common.sh@33 -- # return 0 00:03:30.620 23:06:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.620 23:06:19 -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.620 23:06:19 -- setup/hugepages.sh@27 -- # local node 00:03:30.620 23:06:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.620 23:06:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.620 23:06:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.620 23:06:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.620 23:06:19 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.620 23:06:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.620 23:06:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.620 23:06:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.620 23:06:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.620 23:06:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.620 23:06:19 -- setup/common.sh@18 -- # local node=0 00:03:30.620 23:06:19 -- setup/common.sh@19 -- # local var val 00:03:30.620 23:06:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.620 23:06:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.620 23:06:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.620 23:06:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.620 23:06:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.620 23:06:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58003204 kB' 'MemUsed: 7655804 kB' 'SwapCached: 0 
kB' 'Active: 3360180 kB' 'Inactive: 108980 kB' 'Active(anon): 3050660 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3367792 kB' 'Mapped: 99192 kB' 'AnonPages: 104596 kB' 'Shmem: 2949292 kB' 'KernelStack: 13480 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164676 kB' 'Slab: 558460 kB' 'SReclaimable: 164676 kB' 'SUnreclaim: 393784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.620 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.620 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.621 23:06:19 -- setup/common.sh@32 -- # continue
00:03:30.621 23:06:19 -- setup/common.sh@31 -- # IFS=': '
00:03:30.621 23:06:19 -- setup/common.sh@31 -- # read -r var val _
00:03:30.621 23:06:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.621 23:06:19 -- setup/common.sh@33 -- # echo 0
00:03:30.621 23:06:19 -- setup/common.sh@33 -- # return 0
00:03:30.621 23:06:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.621 23:06:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.621 23:06:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.621 23:06:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.621 23:06:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:30.621 node0=1024 expecting 1024
00:03:30.621 23:06:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:30.621
00:03:30.621 real 0m3.943s
00:03:30.621 user 0m1.479s
00:03:30.621 sys 0m2.460s
00:03:30.621 23:06:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:30.621 23:06:19 -- common/autotest_common.sh@10 -- # set +x
00:03:30.621 ************************************
00:03:30.621 END TEST default_setup
00:03:30.621 ************************************
00:03:30.621 23:06:19 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:30.621 23:06:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:30.621 23:06:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:30.621 23:06:19 -- common/autotest_common.sh@10 -- # set +x
00:03:30.883 ************************************
00:03:30.883 START TEST per_node_1G_alloc
00:03:30.883 ************************************
00:03:30.883 23:06:19 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:03:30.883 23:06:19 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:30.883 23:06:19 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:30.883 23:06:19 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:30.883 23:06:19 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:30.883 23:06:19 -- setup/hugepages.sh@51 -- # shift
00:03:30.883 23:06:19 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:30.883 23:06:19 -- setup/hugepages.sh@52 -- # local node_ids
00:03:30.883 23:06:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:30.883 23:06:19 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:30.883 23:06:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:30.883 23:06:19 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:30.883 23:06:19 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:30.883 23:06:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:30.883 23:06:19 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:30.883 23:06:19 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:30.883 23:06:19 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:30.883 23:06:19 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:30.883 23:06:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:30.883 23:06:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:30.883 23:06:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:30.883 23:06:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:30.883 23:06:19 -- setup/hugepages.sh@73 -- # return 0
00:03:30.883 23:06:19 -- setup/hugepages.sh@146 -- # NRHUGE=512
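The get_test_nr_hugepages trace above performs simple sizing arithmetic before the NRHUGE/HUGENODE invocation that resumes below: the 1048576 kB (1 GiB) request divided by the 2048 kB hugepage size reported elsewhere in this log gives 512 pages, and that count is assigned to each of nodes 0 and 1, for the 1024-page pool the later verify step checks. A minimal bash sketch of that arithmetic, with an illustrative function name that is not part of SPDK's setup scripts:

    # Sketch only; compute_per_node_hugepages is an assumed name, not SPDK's.
    compute_per_node_hugepages() {
        local size_kb=$1; shift                # e.g. 1048576 kB = 1 GiB
        local hugepage_kb=2048                 # Hugepagesize seen in this log
        local pages=$((size_kb / hugepage_kb)) # 1048576 / 2048 = 512
        local node
        for node in "$@"; do                   # node IDs 0 and 1 in this run
            echo "node${node}=${pages}"
        done
    }
    compute_per_node_hugepages 1048576 0 1     # prints node0=512 and node1=512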
23:06:19 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:30.883 23:06:19 -- setup/hugepages.sh@146 -- # setup output
00:03:30.883 23:06:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.883 23:06:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:34.190 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:34.190 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:34.190 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:34.455 23:06:23 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:34.455 23:06:23 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:34.455 23:06:23 -- setup/hugepages.sh@89 -- # local node
00:03:34.455 23:06:23 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:34.455 23:06:23 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:34.455 23:06:23 -- setup/hugepages.sh@92 -- # local surp
00:03:34.455 23:06:23 -- setup/hugepages.sh@93 -- # local resv
00:03:34.455 23:06:23 -- setup/hugepages.sh@94 -- # local anon
00:03:34.455 23:06:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:34.455 23:06:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:34.455 23:06:23 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:34.455 23:06:23 -- setup/common.sh@18 -- # local node=
00:03:34.455 23:06:23 -- setup/common.sh@19 -- # local var val
00:03:34.455 23:06:23 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.455 23:06:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.455 23:06:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.455 23:06:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.455 23:06:23 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.455 23:06:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.455 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.455 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.456 23:06:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106659660 kB' 'MemAvailable: 110200360 kB' 'Buffers: 4124 kB' 'Cached: 13007924 kB' 'SwapCached: 0 kB' 'Active: 10130924 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440528 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637564 kB' 'Mapped: 198600
kB' 'Shmem: 8805856 kB' 'KReclaimable: 317680 kB' 'Slab: 1133176 kB' 'SReclaimable: 317680 kB' 'SUnreclaim: 815496 kB' 'KernelStack: 27136 kB' 'PageTables: 9260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10834032 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234860 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.456 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.456 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.457 23:06:23 -- setup/common.sh@33 -- # echo 0 00:03:34.457 23:06:23 -- setup/common.sh@33 -- # return 0 00:03:34.457 23:06:23 -- setup/hugepages.sh@97 -- # anon=0 00:03:34.457 23:06:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.457 23:06:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.457 23:06:23 -- setup/common.sh@18 -- # local node= 00:03:34.457 23:06:23 -- setup/common.sh@19 -- # local var val 00:03:34.457 23:06:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.457 23:06:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.457 23:06:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.457 23:06:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.457 23:06:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.457 23:06:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106661764 kB' 'MemAvailable: 110202432 kB' 'Buffers: 4124 kB' 'Cached: 13007928 kB' 'SwapCached: 0 kB' 'Active: 10130352 kB' 'Inactive: 3515796 kB' 'Active(anon): 9439956 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637460 kB' 'Mapped: 198448 kB' 'Shmem: 8805860 kB' 'KReclaimable: 317616 kB' 'Slab: 1133152 kB' 'SReclaimable: 317616 kB' 'SUnreclaim: 815536 kB' 'KernelStack: 27152 kB' 'PageTables: 9312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10834044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234828 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 
23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.457 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.457 23:06:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.458 23:06:23 -- setup/common.sh@33 -- # echo 0 00:03:34.458 23:06:23 -- setup/common.sh@33 -- # return 0 00:03:34.458 23:06:23 -- setup/hugepages.sh@99 -- # surp=0 00:03:34.458 23:06:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.458 23:06:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.458 23:06:23 -- setup/common.sh@18 -- # local node= 00:03:34.458 23:06:23 -- setup/common.sh@19 -- # local var val 00:03:34.458 23:06:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.458 23:06:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.458 23:06:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.458 23:06:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.458 23:06:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.458 23:06:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106661764 kB' 'MemAvailable: 110202432 kB' 'Buffers: 4124 kB' 'Cached: 13007928 kB' 'SwapCached: 0 kB' 'Active: 10130012 kB' 'Inactive: 3515796 kB' 'Active(anon): 9439616 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637120 kB' 'Mapped: 198448 kB' 'Shmem: 8805860 kB' 'KReclaimable: 317616 kB' 'Slab: 1133152 kB' 'SReclaimable: 317616 kB' 'SUnreclaim: 815536 kB' 'KernelStack: 27136 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10834060 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234828 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 
-- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.458 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.458 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue 
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # continue
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # IFS=': '
00:03:34.459 23:06:23 -- setup/common.sh@31 -- # read -r var val _
00:03:34.459 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.459 23:06:23 -- setup/common.sh@33 -- # echo 0
00:03:34.459 23:06:23 -- setup/common.sh@33 -- # return 0
00:03:34.459 23:06:23 -- setup/hugepages.sh@100 -- # resv=0
00:03:34.459 23:06:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:34.459 nr_hugepages=1024
00:03:34.459 23:06:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:34.459 resv_hugepages=0
00:03:34.459 23:06:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:34.459 surplus_hugepages=0
00:03:34.459 23:06:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:34.459 anon_hugepages=0
00:03:34.459 23:06:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:34.459 23:06:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
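The checks at hugepages.sh@107 and @109 above assert the accounting identity this test builds up to: the expected pool size must equal the page count reported by /proc/meminfo once surplus and reserved pages are added back (here 1024 == 1024 + 0 + 0). A hedged bash sketch of that check, with an illustrative helper name that is not SPDK's:

    # Sketch only; check_hugepage_accounting is an assumed name, not SPDK's.
    check_hugepage_accounting() {
        local expected=$1
        local total surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
        resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
        (( expected == total + surp + resv ))   # 1024 == 1024 + 0 + 0 in this run
    }
    check_hugepage_accounting 1024 && echo "hugepage accounting consistent"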
00:03:34.459 23:06:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.459 23:06:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.459 23:06:23 -- setup/common.sh@18 -- # local node= 00:03:34.459 23:06:23 -- setup/common.sh@19 -- # local var val 00:03:34.459 23:06:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.459 23:06:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.459 23:06:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.459 23:06:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.459 23:06:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.460 23:06:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106661764 kB' 'MemAvailable: 110202432 kB' 'Buffers: 4124 kB' 'Cached: 13007928 kB' 'SwapCached: 0 kB' 'Active: 10130516 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440120 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637624 kB' 'Mapped: 198448 kB' 'Shmem: 8805860 kB' 'KReclaimable: 317616 kB' 'Slab: 1133152 kB' 'SReclaimable: 317616 kB' 'SUnreclaim: 815536 kB' 'KernelStack: 27136 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10834072 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234828 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.460 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.460 23:06:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 
-- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.724 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.724 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- 
setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.725 23:06:23 -- setup/common.sh@33 -- # echo 1024 00:03:34.725 23:06:23 -- setup/common.sh@33 -- # return 0 00:03:34.725 23:06:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.725 23:06:23 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.725 23:06:23 -- setup/hugepages.sh@27 -- # local node 00:03:34.725 23:06:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.725 23:06:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.725 23:06:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.725 23:06:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.725 23:06:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.725 23:06:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.725 23:06:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.725 23:06:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.725 23:06:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.725 23:06:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.725 23:06:23 -- setup/common.sh@18 -- # local node=0 00:03:34.725 23:06:23 -- setup/common.sh@19 -- # local var val 00:03:34.725 23:06:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.725 23:06:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.725 23:06:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.725 23:06:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.725 23:06:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.725 23:06:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:34.725 23:06:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59056228 kB' 'MemUsed: 6602780 kB' 'SwapCached: 0 kB' 'Active: 3358476 kB' 'Inactive: 108980 kB' 'Active(anon): 3048956 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3367912 kB' 'Mapped: 98076 kB' 'AnonPages: 102792 kB' 'Shmem: 2949412 kB' 'KernelStack: 13368 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164676 kB' 'Slab: 558208 kB' 'SReclaimable: 164676 kB' 'SUnreclaim: 393532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # 
continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.725 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.725 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 
23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@33 -- # echo 0 00:03:34.726 23:06:23 -- setup/common.sh@33 -- # return 0 00:03:34.726 23:06:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.726 23:06:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.726 23:06:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.726 23:06:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:34.726 23:06:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.726 23:06:23 -- setup/common.sh@18 -- # local node=1 00:03:34.726 23:06:23 -- setup/common.sh@19 -- # local var val 00:03:34.726 23:06:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.726 23:06:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.726 23:06:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:34.726 23:06:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:34.726 23:06:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.726 23:06:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 47606464 kB' 'MemUsed: 13073396 kB' 'SwapCached: 0 kB' 'Active: 6771600 kB' 'Inactive: 3406816 kB' 'Active(anon): 6390724 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3406816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644192 kB' 'Mapped: 100372 kB' 'AnonPages: 534292 kB' 'Shmem: 5856500 kB' 'KernelStack: 13752 kB' 'PageTables: 5604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152940 kB' 'Slab: 574944 kB' 'SReclaimable: 152940 kB' 'SUnreclaim: 422004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 
00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.726 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.726 23:06:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # continue 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.727 23:06:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.727 23:06:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.727 23:06:23 -- setup/common.sh@33 -- # echo 0 00:03:34.727 23:06:23 -- setup/common.sh@33 -- # return 0 00:03:34.727 23:06:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.727 23:06:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.727 23:06:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.727 23:06:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.727 23:06:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.727 node0=512 expecting 512 00:03:34.727 23:06:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.727 23:06:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.727 23:06:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.727 23:06:23 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:34.727 node1=512 expecting 512 00:03:34.727 23:06:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:34.727 00:03:34.727 real 0m3.877s 00:03:34.727 user 0m1.579s 00:03:34.727 sys 0m2.350s 00:03:34.727 23:06:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.727 23:06:23 -- common/autotest_common.sh@10 -- # set +x 00:03:34.727 ************************************ 00:03:34.727 END TEST per_node_1G_alloc 00:03:34.727 ************************************ 00:03:34.727 23:06:23 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:34.727 23:06:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.727 23:06:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.727 23:06:23 -- common/autotest_common.sh@10 -- # set +x 00:03:34.989 ************************************ 00:03:34.989 START TEST even_2G_alloc 00:03:34.989 ************************************ 00:03:34.989 23:06:23 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:34.989 23:06:23 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:34.989 23:06:23 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:34.989 23:06:23 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.989 23:06:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.989 23:06:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:34.989 23:06:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.989 23:06:23 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.989 23:06:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.989 23:06:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.989 23:06:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.989 23:06:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.989 23:06:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.989 23:06:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.989 23:06:23 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.989 23:06:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.989 23:06:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:34.989 23:06:23 -- setup/hugepages.sh@83 -- # : 512 00:03:34.989 23:06:23 -- setup/hugepages.sh@84 -- # : 1 00:03:34.989 23:06:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.989 23:06:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:34.989 23:06:23 -- setup/hugepages.sh@83 -- # : 0 00:03:34.989 23:06:23 -- setup/hugepages.sh@84 -- # : 0 00:03:34.989 23:06:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.989 23:06:23 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:34.989 23:06:23 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:34.989 23:06:23 -- setup/hugepages.sh@153 -- # setup output 00:03:34.989 23:06:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.989 23:06:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.295 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:38.295 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:38.295 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:38.295 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:38.563 23:06:27 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:38.563 23:06:27 -- setup/hugepages.sh@89 -- # local node 00:03:38.563 23:06:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.563 23:06:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.563 23:06:27 -- setup/hugepages.sh@92 -- # local surp 00:03:38.563 23:06:27 -- setup/hugepages.sh@93 -- # local resv 00:03:38.564 23:06:27 -- setup/hugepages.sh@94 -- # local anon 00:03:38.564 23:06:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.564 23:06:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.564 23:06:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.564 23:06:27 -- setup/common.sh@18 -- # local node= 00:03:38.564 23:06:27 -- setup/common.sh@19 -- # local var val 00:03:38.564 23:06:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.564 23:06:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.564 23:06:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.564 23:06:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.564 23:06:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.564 23:06:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106678288 kB' 'MemAvailable: 110218956 kB' 'Buffers: 4124 kB' 'Cached: 13008064 kB' 'SwapCached: 0 kB' 'Active: 10132124 kB' 'Inactive: 3515796 kB' 'Active(anon): 9441728 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638324 kB' 'Mapped: 198564 kB' 'Shmem: 8805996 kB' 'KReclaimable: 317616 kB' 'Slab: 1132736 kB' 'SReclaimable: 317616 kB' 'SUnreclaim: 815120 kB' 'KernelStack: 27088 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10834816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234892 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val 
_
00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue
00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': '
00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the same [[ key == pattern ]] / continue / IFS=': ' / read -r var val _ cycle repeats for every key from Zswap through HardwareCorrupted ...]
00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:38.564 23:06:27 -- setup/common.sh@33 -- # echo 0
00:03:38.564 23:06:27 -- setup/common.sh@33 -- # return 0
00:03:38.564 23:06:27 -- setup/hugepages.sh@97 -- # anon=0
00:03:38.564 23:06:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:38.564 23:06:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.564 23:06:27 -- setup/common.sh@18 -- # local node=
00:03:38.564 23:06:27 -- setup/common.sh@19 -- # local var val
00:03:38.564 23:06:27 -- setup/common.sh@20 -- # local mem_f mem
00:03:38.564 23:06:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.564 23:06:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.564 23:06:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.564 23:06:27 -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.564 23:06:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': '
00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _
00:03:38.564 23:06:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106679192 kB' 'MemAvailable: 110219860 kB' 'Buffers: 4124 kB' 'Cached: 13008068 kB' 'SwapCached: 0 kB' 'Active: 10131552 kB' 'Inactive: 3515796 kB' 'Active(anon): 9441156 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638072 kB' 'Mapped: 198508 kB' 'Shmem: 8806000 kB' 'KReclaimable: 317616 kB' 'Slab: 1132720 kB' 'SReclaimable: 317616 kB' 'SUnreclaim: 815104 kB' 'KernelStack: 27088 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10834828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234908 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
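The cycle traced above is get_meminfo (setup/common.sh, per the trace locations) walking the snapshot one "Key: value" pair at a time until the requested key matches, then echoing the value. A minimal sketch of that loop, reconstructed from the xtrace rather than copied from the SPDK script (structure and names follow the trace; the return-on-missing-node path is an assumption):

    #!/usr/bin/env bash
    # Reconstruction of the get_meminfo scan seen in the xtrace above:
    # print the value of one "Key: value [kB]" pair from (per-node) meminfo.
    shopt -s extglob                       # for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo  # per-node stats
        elif [[ -n $node ]]; then
            return 1                       # assumption: bail on a missing node
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # node files prefix lines: "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                    # e.g. "0" for HugePages_Surp here
            return 0
        done
    }

Called as get_meminfo HugePages_Surp it prints the system-wide surplus count (0 in this run); with a node argument, as in get_meminfo HugePages_Surp 0 later in the trace, it reads that node's sysfs copy instead.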
00:03:38.564 23:06:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.564 23:06:27 -- setup/common.sh@32 -- # continue
00:03:38.564 23:06:27 -- setup/common.sh@31 -- # IFS=': '
00:03:38.564 23:06:27 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the same [[ key == pattern ]] / continue / IFS=': ' / read -r var val _ cycle repeats for every key from MemFree through HugePages_Rsvd ...]
00:03:38.565 23:06:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.565 23:06:27 -- setup/common.sh@33 -- # echo 0
00:03:38.565 23:06:27 -- setup/common.sh@33 -- # return 0
00:03:38.565 23:06:27 -- setup/hugepages.sh@99 -- # surp=0
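A side note on the \H\u\g\e\P\a\g\e\s\_\S\u\r\p runs that dominate this part of the log: they are not corruption. Bash's xtrace backslash-escapes the quoted right-hand side of == inside [[ ]] to show it is compared as a literal string rather than as a glob. A tiny repro (variable values arbitrary; the harness's PS4 supplies the timestamped "script@line -- #" decoration in place of the default "+"):

    set -x
    get=HugePages_Surp var=MemTotal
    [[ $var == "$get" ]]
    # xtrace output:  + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]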
00:03:38.565 23:06:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:38.565 23:06:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:38.565 23:06:27 -- setup/common.sh@18 -- # local node=
00:03:38.565 23:06:27 -- setup/common.sh@19 -- # local var val
00:03:38.565 23:06:27 -- setup/common.sh@20 -- # local mem_f mem
00:03:38.565 23:06:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.565 23:06:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.565 23:06:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.565 23:06:27 -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.565 23:06:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.565 23:06:27 -- setup/common.sh@31 -- # IFS=': '
00:03:38.565 23:06:27 -- setup/common.sh@31 -- # read -r var val _
00:03:38.565 23:06:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106679192 kB' 'MemAvailable: 110219860 kB' 'Buffers: 4124 kB' 'Cached: 13008068 kB' 'SwapCached: 0 kB' 'Active: 10131056 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440660 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638032 kB' 'Mapped: 198432 kB' 'Shmem: 8806000 kB' 'KReclaimable: 317616 kB' 'Slab: 1132736 kB' 'SReclaimable: 317616 kB' 'SUnreclaim: 815120 kB' 'KernelStack: 27088 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10834844 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234908 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
00:03:38.565 23:06:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:38.565 23:06:27 -- setup/common.sh@32 -- # continue
00:03:38.565 23:06:27 -- setup/common.sh@31 -- # IFS=': '
00:03:38.565 23:06:27 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the same [[ key == pattern ]] / continue / IFS=': ' / read -r var val _ cycle repeats for every key from MemFree through HugePages_Free ...]
00:03:38.566 23:06:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:38.566 23:06:27 -- setup/common.sh@33 -- # echo 0
00:03:38.566 23:06:27 -- setup/common.sh@33 -- # return 0
00:03:38.566 23:06:27 -- setup/hugepages.sh@100 -- # resv=0
00:03:38.566 23:06:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:38.566 nr_hugepages=1024
00:03:38.566 23:06:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:38.566 resv_hugepages=0
00:03:38.566 23:06:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:38.566 surplus_hugepages=0
00:03:38.566 23:06:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:38.566 anon_hugepages=0
00:03:38.566 23:06:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:38.566 23:06:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
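With anon, surp and resv collected, hugepages.sh is cross-checking the kernel's hugepage bookkeeping: the configured pool (nr_hugepages=1024) must account for every page reported, including surplus and reserved ones. A sketch of that arithmetic with this run's values, reusing the get_meminfo sketch above (the exact script lines may differ; anon is reported but does not enter the sum):

    nr_hugepages=1024                    # pool size the test configured
    anon=$(get_meminfo AnonHugePages)    # THP in use, kB        -> 0
    surp=$(get_meminfo HugePages_Surp)   # pages beyond the pool -> 0
    resv=$(get_meminfo HugePages_Rsvd)   # reserved, unfaulted   -> 0
    # 1024 == 1024 + 0 + 0, so both checks pass in this run
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    (( $(get_meminfo HugePages_Total) == nr_hugepages ))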
00:03:38.566 23:06:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:38.566 23:06:27 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:38.566 23:06:27 -- setup/common.sh@18 -- # local node=
00:03:38.566 23:06:27 -- setup/common.sh@19 -- # local var val
00:03:38.566 23:06:27 -- setup/common.sh@20 -- # local mem_f mem
00:03:38.566 23:06:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.566 23:06:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.566 23:06:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.566 23:06:27 -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.566 23:06:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.566 23:06:27 -- setup/common.sh@31 -- # IFS=': '
00:03:38.566 23:06:27 -- setup/common.sh@31 -- # read -r var val _
00:03:38.567 23:06:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106680092 kB' 'MemAvailable: 110220760 kB' 'Buffers: 4124 kB' 'Cached: 13008092 kB' 'SwapCached: 0 kB' 'Active: 10131112 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440716 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638028 kB' 'Mapped: 198432 kB' 'Shmem: 8806024 kB' 'KReclaimable: 317616 kB' 'Slab: 1132736 kB' 'SReclaimable: 317616 kB' 'SUnreclaim: 815120 kB' 'KernelStack: 27088 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10834856 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234908 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
00:03:38.567 23:06:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:38.567 23:06:27 -- setup/common.sh@32 -- # continue
00:03:38.567 23:06:27 -- setup/common.sh@31 -- # IFS=': '
00:03:38.567 23:06:27 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the same [[ key == pattern ]] / continue / IFS=': ' / read -r var val _ cycle repeats for every key from MemFree through Unaccepted ...]
00:03:38.568 23:06:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:38.568 23:06:27 -- setup/common.sh@33 -- # echo 1024
00:03:38.568 23:06:27 -- setup/common.sh@33 -- # return 0
00:03:38.568 23:06:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:38.568 23:06:27 -- setup/hugepages.sh@112 -- # get_nodes
00:03:38.568 23:06:27 -- setup/hugepages.sh@27 -- # local node
00:03:38.568 23:06:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:38.568 23:06:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:38.568 23:06:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:38.568 23:06:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:38.568 23:06:27 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:38.568 23:06:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:38.568 23:06:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:38.568 23:06:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:38.568 23:06:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:38.568 23:06:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.568 23:06:27 -- setup/common.sh@18 -- # local node=0
00:03:38.568 23:06:27 -- setup/common.sh@19 -- # local var val
00:03:38.568 23:06:27 -- setup/common.sh@20 -- # local mem_f mem
00:03:38.568 23:06:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.568 23:06:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:38.568 23:06:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:38.568 23:06:27 -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.568 23:06:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.568 23:06:27 -- setup/common.sh@31 -- # IFS=': '
00:03:38.568 23:06:27 -- setup/common.sh@31 -- # read -r var val _
00:03:38.568 23:06:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59082536 kB' 'MemUsed: 6576472 kB' 'SwapCached: 0 kB' 'Active: 3359004 kB' 'Inactive: 108980 kB' 'Active(anon): 3049484 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3367988 kB' 'Mapped: 98120 kB' 'AnonPages: 103204 kB' 'Shmem: 2949488 kB' 'KernelStack: 13368 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164676 kB' 'Slab: 557964 kB' 'SReclaimable: 164676 kB' 'SUnreclaim: 393288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
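get_nodes has just found two NUMA nodes and recorded 512 expected pages on each; the loop that follows repeats the HugePages_Surp lookup per node, where get_meminfo swaps /proc/meminfo for /sys/devices/system/node/nodeN/meminfo and the mapfile post-processing strips the "Node N " line prefix. A sketch of that per-node accumulation, again reusing the get_meminfo sketch above (nodes_test is seeded by the calling test, which this part of the trace does not show, so the sketch seeds it from nodes_sys):

    shopt -s extglob
    declare -a nodes_sys nodes_test
    # Enumerate NUMA nodes; this machine has two, 512 pages expected on each
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}            # 2 in this run
    (( no_nodes > 0 ))
    # Assumption: mirror nodes_sys into nodes_test for a self-contained sketch
    for node in "${!nodes_sys[@]}"; do nodes_test[node]=${nodes_sys[node]}; done
    resv=0                               # HugePages_Rsvd read earlier
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # fold reserved pages into the target
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done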
23:06:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.568 23:06:27 -- setup/common.sh@32 -- # continue
[xtrace condensed: the node0 scan steps through the remaining meminfo keys (SwapCached .. HugePages_Free) the same way, one IFS=': ' read and one '# continue' per key that is not HugePages_Surp]
00:03:38.568 23:06:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.568 23:06:27 -- setup/common.sh@33 -- # echo 0
00:03:38.568 23:06:27 -- setup/common.sh@33 -- # return 0
00:03:38.568 23:06:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:38.568 23:06:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:38.568 23:06:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:38.568 23:06:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
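Annotation: the get_meminfo HugePages_Surp 1 call above is the routine that the common.sh@17-@33 trace lines keep replaying. A minimal re-implementation inferred from the trace alone, a sketch rather than SPDK's actual setup/common.sh, so treat names and details as assumptions:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from
    # the per-NUMA-node meminfo file when NODE is given (as in the trace).
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 on this host, matching the trace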
00:03:38.568 23:06:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.568 23:06:27 -- setup/common.sh@18 -- # local node=1
00:03:38.568 23:06:27 -- setup/common.sh@19 -- # local var val
00:03:38.568 23:06:27 -- setup/common.sh@20 -- # local mem_f mem
00:03:38.568 23:06:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.568 23:06:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:38.568 23:06:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:38.568 23:06:27 -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.568 23:06:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.568 23:06:27 -- setup/common.sh@31 -- # IFS=': '
00:03:38.568 23:06:27 -- setup/common.sh@31 -- # read -r var val _
00:03:38.569 23:06:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 47598192 kB' 'MemUsed: 13081668 kB' 'SwapCached: 0 kB' 'Active: 6772268 kB' 'Inactive: 3406816 kB' 'Active(anon): 6391392 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3406816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644244 kB' 'Mapped: 100320 kB' 'AnonPages: 535000 kB' 'Shmem: 5856552 kB' 'KernelStack: 13720 kB' 'PageTables: 5440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152940 kB' 'Slab: 574772 kB' 'SReclaimable: 152940 kB' 'SUnreclaim: 421832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: per-key scan of the node1 fields above (MemTotal .. HugePages_Free), '# continue' on every key that is not HugePages_Surp]
00:03:38.569 23:06:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.569 23:06:27 -- setup/common.sh@33 -- # echo 0
00:03:38.569 23:06:27 -- setup/common.sh@33 -- # return 0
00:03:38.569 23:06:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:38.569 23:06:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:38.569 23:06:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:38.569 23:06:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:38.569 23:06:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:38.569 node0=512 expecting 512
00:03:38.569 23:06:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:38.569 23:06:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:38.569 23:06:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:38.569 23:06:27 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:38.569 node1=512 expecting 512
00:03:38.569 23:06:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:38.569 
00:03:38.569 real 0m3.812s
00:03:38.569 user 0m1.503s
00:03:38.569 sys 0m2.355s
00:03:38.569 23:06:27 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:38.569 23:06:27 -- common/autotest_common.sh@10 -- # set +x
00:03:38.569 ************************************
00:03:38.569 END TEST even_2G_alloc
00:03:38.569 ************************************
00:03:38.831 23:06:27 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:38.831 23:06:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:38.831 23:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:38.831 23:06:27 -- common/autotest_common.sh@10 -- # set +x
00:03:38.831 ************************************
00:03:38.831 START TEST odd_alloc
00:03:38.831 ************************************
00:03:38.831 23:06:27 -- common/autotest_common.sh@1111 -- # odd_alloc
00:03:38.831 23:06:27 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:38.831 23:06:27 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:38.831 23:06:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:38.831 23:06:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:38.831 23:06:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
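Annotation: odd_alloc asks get_test_nr_hugepages for 2098176 kB and the trace lands on nr_hugepages=1025. A minimal re-derivation of that arithmetic, assuming kB units and the 2048 kB Hugepagesize reported in this log's meminfo dumps; the rounding direction is inferred from the 1024.5 -> 1025 result, not read from hugepages.sh:

    #!/usr/bin/env bash
    # 2098176 kB requested (HUGEMEM=2049 MB), default hugepage = 2048 kB.
    size_kb=2098176
    hugepage_kb=2048
    # 2098176 / 2048 = 1024.5; an integer ceiling yields the odd count 1025.
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=1025, as logged at hugepages.sh@57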
00:03:38.831 23:06:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:38.831 23:06:27 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:38.831 23:06:27 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:38.831 23:06:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:38.831 23:06:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:38.831 23:06:27 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:38.831 23:06:27 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:38.831 23:06:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:38.831 23:06:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:38.831 23:06:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:38.831 23:06:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:38.831 23:06:27 -- setup/hugepages.sh@83 -- # : 513
00:03:38.831 23:06:27 -- setup/hugepages.sh@84 -- # : 1
00:03:38.831 23:06:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:38.831 23:06:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:38.831 23:06:27 -- setup/hugepages.sh@83 -- # : 0
00:03:38.831 23:06:27 -- setup/hugepages.sh@84 -- # : 0
00:03:38.831 23:06:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
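Annotation: the @81-@84 loop above splits the odd page count across the two NUMA nodes, last node first, which is why node1 gets 512 and node0 gets 513. A sketch of that split, with variable names mirroring the trace but the loop body inferred rather than copied from hugepages.sh:

    #!/usr/bin/env bash
    _nr_hugepages=1025
    _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        # Last node first: give it an even share of what remains.
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traces as ': 513' then ': 0'
        : $(( --_no_nodes ))                                  # traces as ': 1' then ': 0'
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"      # node0=513 node1=512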
+([0-9]) }") 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106698376 kB' 'MemAvailable: 110239028 kB' 'Buffers: 4124 kB' 'Cached: 13008208 kB' 'SwapCached: 0 kB' 'Active: 10133272 kB' 'Inactive: 3515796 kB' 'Active(anon): 9442876 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639648 kB' 'Mapped: 198576 kB' 'Shmem: 8806140 kB' 'KReclaimable: 317584 kB' 'Slab: 1133116 kB' 'SReclaimable: 317584 kB' 'SUnreclaim: 815532 kB' 'KernelStack: 27232 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 10838212 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235020 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- 
setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ 
AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.462 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.462 23:06:31 -- setup/common.sh@31 -- # read 
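Annotation: the hugepages.sh@96 test above, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], appears to gate the AnonHugePages sample on transparent hugepages not being disabled. A hedged reading of that check; the sysfs path is the standard kernel location, but the surrounding logic is inferred from the trace, not copied:

    # The kernel brackets the active THP mode, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP can allocate anonymous hugepages behind the test's back, so
        # record the current AnonHugePages figure (0 kB in this run), using
        # get_meminfo as sketched earlier.
        anon=$(get_meminfo AnonHugePages)
    fi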
00:03:42.463 23:06:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:42.463 23:06:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.463 23:06:31 -- setup/common.sh@18 -- # local node=
00:03:42.463 23:06:31 -- setup/common.sh@19 -- # local var val
00:03:42.463 23:06:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.463 23:06:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.463 23:06:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.463 23:06:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.463 23:06:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.463 23:06:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.463 23:06:31 -- setup/common.sh@31 -- # IFS=': '
00:03:42.463 23:06:31 -- setup/common.sh@31 -- # read -r var val _
00:03:42.463 23:06:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106698244 kB' 'MemAvailable: 110238896 kB' 'Buffers: 4124 kB' 'Cached: 13008216 kB' 'SwapCached: 0 kB' 'Active: 10132188 kB' 'Inactive: 3515796 kB' 'Active(anon): 9441792 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638536 kB' 'Mapped: 198560 kB' 'Shmem: 8806148 kB' 'KReclaimable: 317584 kB' 'Slab: 1133052 kB' 'SReclaimable: 317584 kB' 'SUnreclaim: 815468 kB' 'KernelStack: 27136 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 10838228 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234956 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
[xtrace condensed: same per-key walk over the list above (MemTotal .. HugePages_Rsvd), '# continue' on every key that is not HugePages_Surp]
00:03:42.464 23:06:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.464 23:06:31 -- setup/common.sh@33 -- # echo 0
00:03:42.464 23:06:31 -- setup/common.sh@33 -- # return 0
00:03:42.464 23:06:31 -- setup/hugepages.sh@99 -- # surp=0
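Annotation: verify_nr_hugepages is gathering three system-wide counters (the local anon, surp, and resv declared at @92-@94) before it checks the per-node totals; the surplus is 0 in this run, and the reserved-pages query follows below. In terms of the get_meminfo sketch given earlier, the sequence amounts to:

    anon=$(get_meminfo AnonHugePages)   # 0 kB - no THP-backed anonymous pages
    surp=$(get_meminfo HugePages_Surp)  # 0   - no surplus pages beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)  # queried next; the section ends mid-scan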
00:03:42.464 23:06:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:42.464 23:06:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:42.464 23:06:31 -- setup/common.sh@18 -- # local node=
00:03:42.464 23:06:31 -- setup/common.sh@19 -- # local var val
00:03:42.464 23:06:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.464 23:06:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.464 23:06:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.464 23:06:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.464 23:06:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.464 23:06:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.464 23:06:31 -- setup/common.sh@31 -- # IFS=': '
00:03:42.464 23:06:31 -- setup/common.sh@31 -- # read -r var val _
00:03:42.464 23:06:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106699032 kB' 'MemAvailable: 110239684 kB' 'Buffers: 4124 kB' 'Cached: 13008224 kB' 'SwapCached: 0 kB' 'Active: 10132128 kB' 'Inactive: 3515796 kB' 'Active(anon): 9441732 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638716 kB' 'Mapped: 198492 kB' 'Shmem: 8806156 kB' 'KReclaimable: 317584 kB' 'Slab: 1133032 kB' 'SReclaimable: 317584 kB' 'SUnreclaim: 815448 kB' 'KernelStack: 27072 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 10836620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234924 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
[xtrace condensed: per-key scan against HugePages_Rsvd (MemTotal .. VmallocChunk so far); the section cuts off mid-scan at]
00:03:42.465 23:06:31 -- setup/common.sh@32 -- # [[ Percpu
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.465 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.465 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.465 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.465 23:06:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.465 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # continue 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 23:06:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 23:06:31 -- setup/common.sh@33 -- # echo 0 00:03:42.755 23:06:31 -- setup/common.sh@33 -- # return 0 
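A note for readers following the trace: the @31/@32 churn above is get_meminfo from setup/common.sh resolving one field of a meminfo file. It loads /proc/meminfo (or a node's own copy when a node is named), strips any 'Node N' prefix, then skips each field until the requested key matches and echoes its value. A minimal runnable sketch of that logic, condensed from the trace rather than copied from the script (the in-tree helper uses mapfile, as the @28 line shows):

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) prefix strip below
    # Sketch of get_meminfo as traced above; not the verbatim SPDK helper.
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }        # per-node files prefix each field with "Node N"
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the skip loop seen in the trace
            echo "${val:-0}"
            return 0
        done < "$mem_f"
        return 1
    }

Here the call resolves HugePages_Rsvd to 0, which is the 'echo 0' / 'return 0' pair just above.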
00:03:42.755 23:06:31 -- setup/hugepages.sh@100 -- # resv=0
00:03:42.755 23:06:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:42.755 nr_hugepages=1025
23:06:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:42.755 resv_hugepages=0
23:06:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:42.755 surplus_hugepages=0
23:06:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:42.755 anon_hugepages=0
23:06:31 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:42.755 23:06:31 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
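The @107 and @109 checks just above are the test's accounting identity: the hugepage total the kernel reports must equal the requested count plus surplus plus reserved pages, and the get_meminfo HugePages_Total call that follows feeds the same identity again at @110. Restated as a standalone check using the get_meminfo sketch from earlier (the 1025/0/0 figures are the ones echoed in this run):

    nr_hugepages=1025                      # requested by the odd_alloc test
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1025 in this run
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'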
00:03:42.755 23:06:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:42.755 23:06:31 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:42.755 23:06:31 -- setup/common.sh@18 -- # local node=
00:03:42.755 23:06:31 -- setup/common.sh@19 -- # local var val
00:03:42.755 23:06:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.755 23:06:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.755 23:06:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.755 23:06:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.755 23:06:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.755 23:06:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.755 23:06:31 -- setup/common.sh@31 -- # IFS=': '
00:03:42.755 23:06:31 -- setup/common.sh@31 -- # read -r var val _
00:03:42.755 23:06:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106698160 kB' 'MemAvailable: 110238812 kB' 'Buffers: 4124 kB' 'Cached: 13008236 kB' 'SwapCached: 0 kB' 'Active: 10131668 kB' 'Inactive: 3515796 kB' 'Active(anon): 9441272 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638436 kB' 'Mapped: 198484 kB' 'Shmem: 8806168 kB' 'KReclaimable: 317584 kB' 'Slab: 1133032 kB' 'SReclaimable: 317584 kB' 'SUnreclaim: 815448 kB' 'KernelStack: 27168 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 10855404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234956 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
[xtrace condensed: the read loop compares and skips every field from MemTotal through Unaccepted with the same IFS=': ' / read -r var val _ / continue pattern]
00:03:42.756 23:06:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:42.756 23:06:31 -- setup/common.sh@33 -- # echo 1025
00:03:42.756 23:06:31 -- setup/common.sh@33 -- # return 0
00:03:42.756 23:06:31 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:42.756 23:06:31 -- setup/hugepages.sh@112 -- # get_nodes
00:03:42.756 23:06:31 -- setup/hugepages.sh@27 -- # local node
00:03:42.756 23:06:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.756 23:06:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:42.756 23:06:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.756 23:06:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:42.756 23:06:31 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:42.756 23:06:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
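get_nodes, traced just above, discovers the NUMA layout by globbing sysfs and records each node's hugepage count (512 on node0 and 513 on node1 here); the per-node get_meminfo HugePages_Surp reads that follow fill in the rest. A sketch of the discovery step; populating nodes_sys through get_meminfo is an illustrative assumption, since the trace only shows the resulting values:

    shopt -s extglob
    nodes_sys=()
    no_nodes=0
    for node in /sys/devices/system/node/node+([0-9]); do
        node=${node##*node}                # /sys/devices/system/node/node0 -> 0
        nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
        (( ++no_nodes ))
    done
    (( no_nodes > 0 ))                     # 2 on this machine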
00:03:42.756 23:06:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:42.756 23:06:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:42.756 23:06:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:42.756 23:06:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.756 23:06:31 -- setup/common.sh@18 -- # local node=0
00:03:42.756 23:06:31 -- setup/common.sh@19 -- # local var val
00:03:42.756 23:06:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.756 23:06:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.756 23:06:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:42.756 23:06:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:42.756 23:06:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.756 23:06:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.756 23:06:31 -- setup/common.sh@31 -- # IFS=': '
00:03:42.756 23:06:31 -- setup/common.sh@31 -- # read -r var val _
00:03:42.757 23:06:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59073884 kB' 'MemUsed: 6585124 kB' 'SwapCached: 0 kB' 'Active: 3357200 kB' 'Inactive: 108980 kB' 'Active(anon): 3047680 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3368012 kB' 'Mapped: 98160 kB' 'AnonPages: 101268 kB' 'Shmem: 2949512 kB' 'KernelStack: 13496 kB' 'PageTables: 3588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164676 kB' 'Slab: 558012 kB' 'SReclaimable: 164676 kB' 'SUnreclaim: 393336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the read loop skips MemTotal through HugePages_Free with the same IFS=': ' / read -r var val _ / continue pattern]
00:03:42.757 23:06:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.757 23:06:31 -- setup/common.sh@33 -- # echo 0
00:03:42.757 23:06:31 -- setup/common.sh@33 -- # return 0
00:03:42.757 23:06:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:42.757 23:06:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:42.757 23:06:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:42.757 23:06:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:42.757 23:06:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.757 23:06:31 -- setup/common.sh@18 -- # local node=1
00:03:42.758 23:06:31 -- setup/common.sh@19 -- # local var val
00:03:42.758 23:06:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:42.758 23:06:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.758 23:06:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:42.758 23:06:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:42.758 23:06:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.758 23:06:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.758 23:06:31 -- setup/common.sh@31 -- # IFS=': '
00:03:42.758 23:06:31 -- setup/common.sh@31 -- # read -r var val _
00:03:42.758 23:06:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 47621340 kB' 'MemUsed: 13058520 kB' 'SwapCached: 0 kB' 'Active: 6775004 kB' 'Inactive: 3406816 kB' 'Active(anon): 6394128 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3406816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644380 kB' 'Mapped: 100324 kB' 'AnonPages: 537616 kB' 'Shmem: 5856688 kB' 'KernelStack: 13720 kB' 'PageTables: 5720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152908 kB' 'Slab: 574888 kB' 'SReclaimable: 152908 kB' 'SUnreclaim: 421980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: the read loop skips MemTotal through HugePages_Free the same way]
00:03:42.759 23:06:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.759 23:06:31 -- setup/common.sh@33 -- # echo 0
00:03:42.759 23:06:31 -- setup/common.sh@33 -- # return 0
00:03:42.759 23:06:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:42.759 23:06:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:42.759 23:06:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:42.759 23:06:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:42.759 node0=512 expecting 513
00:03:42.759 23:06:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:42.759 23:06:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:42.759 23:06:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:42.759 23:06:31 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:42.759 node1=513 expecting 512
00:03:42.759 23:06:31 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
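The comparison that just completed is the point of odd_alloc: 1025 pages cannot split evenly across two nodes, so the test only requires that the set of per-node counts match, not which node holds the extra page. One reading of the sorted_t/sorted_s idiom above, as a self-contained sketch with this run's values:

    # Using each count as an array INDEX makes "${!arr[*]}" come back numerically
    # sorted, so {512,513} and {513,512} compare equal. Sample values, not live reads:
    declare -a nodes_test=([0]=512 [1]=513) nodes_sys=([0]=513 [1]=512)
    declare -a sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done
    # Mirrors the final check in the trace: [[ 512 513 == \5\1\2\ \5\1\3 ]]
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node split matches: ${!sorted_t[*]}"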
00:03:42.759
00:03:42.759 real 0m3.818s
00:03:42.759 user 0m1.425s
00:03:42.759 sys 0m2.421s
23:06:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:42.759 23:06:31 -- common/autotest_common.sh@10 -- # set +x
00:03:42.759 ************************************
00:03:42.759 END TEST odd_alloc
00:03:42.759 ************************************
00:03:42.759 23:06:31 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:42.759 23:06:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:42.759 23:06:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:42.759 23:06:31 -- common/autotest_common.sh@10 -- # set +x
00:03:42.759 ************************************
00:03:42.759 START TEST custom_alloc
00:03:42.759 ************************************
00:03:42.759 23:06:31 -- common/autotest_common.sh@1111 -- # custom_alloc
00:03:42.759 23:06:31 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:42.759 23:06:31 -- setup/hugepages.sh@169 -- # local node
00:03:42.759 23:06:31 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:42.759 23:06:31 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:42.759 23:06:31 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:42.759 23:06:31 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:42.759 23:06:31 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:42.759 23:06:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:42.759 23:06:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:42.759 23:06:31 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:42.759 23:06:31 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:42.759 23:06:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:42.759 23:06:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:42.759 23:06:31 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:42.759 23:06:31 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:42.759 23:06:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:42.759 23:06:31 -- setup/hugepages.sh@83 -- # : 256
00:03:42.759 23:06:31 -- setup/hugepages.sh@84 -- # : 1
00:03:42.759 23:06:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:42.759 23:06:31 -- setup/hugepages.sh@83 -- # : 0
00:03:42.759 23:06:31 -- setup/hugepages.sh@84 -- # : 0
00:03:42.759 23:06:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:42.759 23:06:31 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
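custom_alloc opens by sizing two hugepage pools; get_test_nr_hugepages converts a target size into a page count. Against the 2048 kB Hugepagesize in the snapshots above, both arguments behave as kB, so 1048576 kB yields the 512 pages just assigned and the 2097152 kB call that follows yields 1024. A sketch with the division inferred from those numbers, not copied from the script:

    default_hugepages=2048   # kB; matches Hugepagesize in this run
    get_test_nr_hugepages() {
        local size=$1                                  # target size in kB
        (( size >= default_hugepages )) || return 1    # the @55 guard above
        nr_hugepages=$(( size / default_hugepages ))
    }
    get_test_nr_hugepages 1048576 && echo "nr_hugepages=$nr_hugepages"   # -> 512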
00:03:42.759 23:06:31 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:42.759 23:06:31 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:42.759 23:06:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:42.759 23:06:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:42.759 23:06:31 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:42.759 23:06:31 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:42.759 23:06:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:42.759 23:06:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:42.759 23:06:31 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:42.759 23:06:31 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:42.759 23:06:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:42.759 23:06:31 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:42.759 23:06:31 -- setup/hugepages.sh@78 -- # return 0
00:03:42.759 23:06:31 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:42.759 23:06:31 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:42.759 23:06:31 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:42.759 23:06:31 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:42.759 23:06:31 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:42.759 23:06:31 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:42.759 23:06:31 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:42.759 23:06:31 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:42.759 23:06:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:42.759 23:06:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:42.759 23:06:31 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:42.759 23:06:31 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:42.759 23:06:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:42.759 23:06:31 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:42.759 23:06:31 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:42.759 23:06:31 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:42.759 23:06:31 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:42.759 23:06:31 -- setup/hugepages.sh@78 -- # return 0
00:03:42.759 23:06:31 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:42.759 23:06:31 -- setup/hugepages.sh@187 -- # setup output
00:03:42.759 23:06:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:42.759 23:06:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
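The HUGENODE string assembled above, 'nodes_hp[0]=512,nodes_hp[1]=1024', is how custom_alloc hands the uneven per-node request to setup.sh, whose output follows. setup.sh accepts this nodes_hp[N]=COUNT syntax; the parse below only illustrates the format (it is not the verbatim setup.sh code), and the sysfs path shown is the standard kernel knob for 2048 kB pages:

    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
    IFS=, read -ra reqs <<< "$HUGENODE"
    for req in "${reqs[@]}"; do
        node=${req#nodes_hp[}
        node=${node%%]*}    # 0, then 1
        pages=${req#*=}     # 512, then 1024
        echo "would set $pages via /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
    done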
vfio-pci driver 00:03:46.090 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:46.090 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.090 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.090 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.090 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.090 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.351 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.351 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.619 23:06:35 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:46.619 23:06:35 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:46.619 23:06:35 -- setup/hugepages.sh@89 -- # local node 00:03:46.619 23:06:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.619 23:06:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.619 23:06:35 -- setup/hugepages.sh@92 -- # local surp 00:03:46.619 23:06:35 -- setup/hugepages.sh@93 -- # local resv 00:03:46.619 23:06:35 -- setup/hugepages.sh@94 -- # local anon 00:03:46.619 23:06:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.619 23:06:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.619 23:06:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.619 23:06:35 -- setup/common.sh@18 -- # local node= 00:03:46.619 23:06:35 -- setup/common.sh@19 -- # local var val 00:03:46.619 23:06:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.619 23:06:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.619 23:06:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.619 23:06:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.619 23:06:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.619 23:06:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.619 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.619 23:06:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 105639708 kB' 'MemAvailable: 109180356 kB' 'Buffers: 4124 kB' 'Cached: 13008368 kB' 'SwapCached: 0 kB' 'Active: 10129420 kB' 'Inactive: 3515796 kB' 'Active(anon): 9439024 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635628 kB' 'Mapped: 198832 kB' 'Shmem: 8806300 kB' 'KReclaimable: 317576 kB' 'Slab: 1133080 kB' 'SReclaimable: 317576 kB' 'SUnreclaim: 815504 kB' 'KernelStack: 27040 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 10829980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:46.619 23:06:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.619 23:06:35 -- setup/common.sh@32 -- # 
continue 00:03:46.619 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.619 23:06:35 -- setup/common.sh@31 -- # read -r var val _ (remaining keys MemFree through Percpu each compared to AnonHugePages and skipped with continue) 00:03:46.620 23:06:35 -- setup/common.sh@31 -- # IFS=': '
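The scan collapsed above is setup/common.sh's get_meminfo helper at work: it snapshots the meminfo file, strips any 'Node N' prefix, then reads key/value pairs with IFS=': ' and echoes the value once the requested field matches, skipping every other key with continue. A minimal standalone sketch of that pattern (illustrative rewrite, not the SPDK helper itself; the real code uses mapfile plus an extglob expansion where this sketch uses sed):

    get_meminfo() {                     # usage: get_meminfo <Field> [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        # per-node counters live in sysfs when a node number is given
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            # echo the value of the first matching key, e.g. 1536 for HugePages_Total
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }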
00:03:46.620 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 23:06:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.620 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.620 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 23:06:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.620 23:06:35 -- setup/common.sh@33 -- # echo 0 00:03:46.620 23:06:35 -- setup/common.sh@33 -- # return 0 00:03:46.620 23:06:35 -- setup/hugepages.sh@97 -- # anon=0 00:03:46.620 23:06:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.620 23:06:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.620 23:06:35 -- setup/common.sh@18 -- # local node= 00:03:46.620 23:06:35 -- setup/common.sh@19 -- # local var val 00:03:46.620 23:06:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.620 23:06:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.620 23:06:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.620 23:06:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.620 23:06:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.620 23:06:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.620 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 23:06:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 105642032 kB' 'MemAvailable: 109182668 kB' 'Buffers: 4124 kB' 'Cached: 13008372 kB' 'SwapCached: 0 kB' 'Active: 10129324 kB' 'Inactive: 3515796 kB' 'Active(anon): 9438928 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636028 kB' 'Mapped: 198536 kB' 'Shmem: 8806304 kB' 'KReclaimable: 317552 kB' 'Slab: 1133088 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 815536 kB' 'KernelStack: 27040 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 10829992 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234780 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:46.620 23:06:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.620 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 23:06:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.620 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.620 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.620 23:06:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.620 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.620 23:06:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.620 23:06:35 -- setup/common.sh@31 -- # read -r var val _ (remaining keys Buffers through AnonHugePages each compared to HugePages_Surp and skipped with continue) 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue
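HugePages_Surp reading 0 is the expected answer here: surplus pages only exist when the kernel has overcommitted beyond the static pool, and this test never raises nr_overcommit_hugepages. The same counters can be read straight from the stock kernel hugetlb sysctl/sysfs knobs (shown for context, for the 2 MiB page size this run uses):

    cat /proc/sys/vm/nr_overcommit_hugepages                        # 0 unless overcommit was enabled
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages # mirrors HugePages_Surp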
00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.621 23:06:35 -- setup/common.sh@33 -- # echo 0 00:03:46.621 23:06:35 -- setup/common.sh@33 -- # return 0 00:03:46.621 23:06:35 -- setup/hugepages.sh@99 -- # surp=0 00:03:46.621 23:06:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.621 23:06:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.621 23:06:35 -- setup/common.sh@18 -- # local node= 00:03:46.621 23:06:35 -- setup/common.sh@19 -- # local var val 00:03:46.621 23:06:35 -- setup/common.sh@20 
-- # local mem_f mem 00:03:46.621 23:06:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.621 23:06:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.621 23:06:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.621 23:06:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.621 23:06:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.621 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.621 23:06:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 105641304 kB' 'MemAvailable: 109181940 kB' 'Buffers: 4124 kB' 'Cached: 13008384 kB' 'SwapCached: 0 kB' 'Active: 10129328 kB' 'Inactive: 3515796 kB' 'Active(anon): 9438932 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636024 kB' 'Mapped: 198536 kB' 'Shmem: 8806316 kB' 'KReclaimable: 317552 kB' 'Slab: 1133088 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 815536 kB' 'KernelStack: 27040 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 10830008 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234780 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.622 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.622 
23:06:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.622 23:06:35 -- setup/common.sh@32 -- # continue (remaining keys Inactive through FileHugePages each compared to HugePages_Rsvd and skipped with continue) 00:03:46.623 23:06:35 -- setup/common.sh@31 -- #
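HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in; with no consumer attached to the pool it also reads 0. The consistency check the script runs next ties the three counters back to the 1536 pages requested, roughly as follows (a sketch of the assertion using the get_meminfo sketch above; the values echoed in the log are nr_hugepages=1536, surplus=0, reserved=0):

    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 1536
    (( total == 1536 + surp + resv )) || echo 'hugepage accounting mismatch' >&2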
read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.623 23:06:35 -- setup/common.sh@33 -- # echo 0 00:03:46.623 23:06:35 -- setup/common.sh@33 -- # return 0 00:03:46.623 23:06:35 -- setup/hugepages.sh@100 -- # resv=0 00:03:46.623 23:06:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:46.623 nr_hugepages=1536 00:03:46.623 23:06:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.623 resv_hugepages=0 00:03:46.623 23:06:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.623 surplus_hugepages=0 00:03:46.623 23:06:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.623 anon_hugepages=0 00:03:46.623 23:06:35 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:46.623 23:06:35 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:46.623 23:06:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.623 23:06:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.623 23:06:35 -- setup/common.sh@18 -- # local node= 00:03:46.623 23:06:35 -- setup/common.sh@19 -- # local var val 00:03:46.623 23:06:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.623 23:06:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.623 23:06:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.623 23:06:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.623 23:06:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.623 23:06:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 105640860 kB' 'MemAvailable: 109181496 kB' 'Buffers: 4124 kB' 'Cached: 13008396 kB' 
'SwapCached: 0 kB' 'Active: 10129348 kB' 'Inactive: 3515796 kB' 'Active(anon): 9438952 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636024 kB' 'Mapped: 198536 kB' 'Shmem: 8806328 kB' 'KReclaimable: 317552 kB' 'Slab: 1133088 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 815536 kB' 'KernelStack: 27040 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 10830024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234780 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.623 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.623 23:06:35 -- setup/common.sh@32 -- # continue (remaining keys Inactive(anon) through CmaTotal each compared to HugePages_Total and skipped with continue) 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625
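HugePages_Total resolves to 1536 just below, which is exactly the split requested through HUGENODE earlier: 512 pages on node 0 plus 1024 on node 1. The same distribution is visible in the kernel's per-node hugetlb tree (standard sysfs layout for 2 MiB pages):

    for n in /sys/devices/system/node/node[0-9]*; do
        echo "$n: $(cat "$n"/hugepages/hugepages-2048kB/nr_hugepages)"
    done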
23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.625 23:06:35 -- setup/common.sh@33 -- # echo 1536 00:03:46.625 23:06:35 -- setup/common.sh@33 -- # return 0 00:03:46.625 23:06:35 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:46.625 23:06:35 -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.625 23:06:35 -- setup/hugepages.sh@27 -- # local node 00:03:46.625 23:06:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.625 23:06:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.625 23:06:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.625 23:06:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.625 23:06:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.625 23:06:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.625 23:06:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.625 23:06:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.625 23:06:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.625 23:06:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.625 23:06:35 -- setup/common.sh@18 -- # local node=0 00:03:46.625 23:06:35 -- setup/common.sh@19 -- # local var val 00:03:46.625 23:06:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.625 23:06:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.625 23:06:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.625 23:06:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.625 23:06:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.625 23:06:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59080964 kB' 'MemUsed: 6578044 kB' 'SwapCached: 0 kB' 'Active: 3355892 kB' 'Inactive: 108980 kB' 'Active(anon): 3046372 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3368116 kB' 'Mapped: 98188 kB' 'AnonPages: 99964 kB' 'Shmem: 2949616 kB' 'KernelStack: 13384 kB' 'PageTables: 3496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164676 kB' 'Slab: 557964 kB' 'SReclaimable: 164676 kB' 'SUnreclaim: 393288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.625 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.625 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # 
continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@33 -- # echo 0 00:03:46.626 23:06:35 -- setup/common.sh@33 -- # return 0 00:03:46.626 23:06:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.626 23:06:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.626 23:06:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.626 23:06:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:46.626 23:06:35 -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.626 23:06:35 -- setup/common.sh@18 -- # local node=1 00:03:46.626 23:06:35 -- setup/common.sh@19 -- # local var val 00:03:46.626 23:06:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.626 23:06:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.626 23:06:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:46.626 23:06:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:46.626 23:06:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.626 23:06:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 46560820 kB' 'MemUsed: 14119040 kB' 'SwapCached: 0 kB' 'Active: 6775828 kB' 'Inactive: 3406816 kB' 'Active(anon): 6394952 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3406816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644404 kB' 'Mapped: 100348 kB' 'AnonPages: 538452 kB' 'Shmem: 5856712 kB' 'KernelStack: 13688 kB' 'PageTables: 5592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152876 kB' 'Slab: 575124 kB' 'SReclaimable: 152876 kB' 'SUnreclaim: 422248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 
-- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.626 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.626 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- 
# continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # continue 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.627 23:06:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.627 23:06:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.627 23:06:35 -- setup/common.sh@33 -- # echo 0 00:03:46.627 23:06:35 -- setup/common.sh@33 -- # return 0 00:03:46.627 23:06:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.627 23:06:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.627 23:06:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.627 23:06:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.627 23:06:35 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:46.627 node0=512 expecting 512 00:03:46.627 23:06:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.627 23:06:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.627 23:06:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.627 23:06:35 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:46.627 node1=1024 expecting 1024 00:03:46.627 23:06:35 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:46.627 00:03:46.627 real 0m3.861s 00:03:46.627 user 0m1.540s 00:03:46.627 sys 0m2.375s 00:03:46.627 23:06:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:46.627 23:06:35 -- common/autotest_common.sh@10 -- # set +x 00:03:46.627 ************************************ 00:03:46.627 END TEST custom_alloc 00:03:46.627 ************************************ 00:03:46.888 23:06:35 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:46.888 23:06:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.888 23:06:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.888 23:06:35 -- common/autotest_common.sh@10 -- # set +x 00:03:46.888 ************************************ 00:03:46.888 START TEST no_shrink_alloc 00:03:46.889 ************************************ 00:03:46.889 23:06:36 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:46.889 23:06:36 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:46.889 23:06:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:46.889 23:06:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:46.889 23:06:36 -- setup/hugepages.sh@51 -- # shift 00:03:46.889 23:06:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:46.889 23:06:36 -- setup/hugepages.sh@52 -- # local node_ids 
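The custom_alloc pass ends just above, and every meminfo value it checked went through setup/common.sh's get_meminfo helper, whose statements the xtrace replays field by field. As a rough sketch of that helper, reconstructed only from the replayed lines (setup/common.sh@17-@33) and not from the upstream source, the flow is: pick /proc/meminfo or the per-NUMA-node file, strip the "Node <N> " prefix, then split each line on ': ' until the requested field matches.

shopt -s extglob   # needed for the +([0-9]) pattern below

# Sketch of get_meminfo as replayed in the trace; names follow the trace,
# and the real helper in setup/common.sh may differ in detail.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo mem
    # Prefer the per-node view when a node id was given and the file exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; drop it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# e.g. the node0 pass above: get_meminfo HugePages_Surp 0   ->   0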
00:03:46.889 23:06:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.889 23:06:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:46.889 23:06:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:46.889 23:06:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:46.889 23:06:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.889 23:06:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:46.889 23:06:36 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.889 23:06:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.889 23:06:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.889 23:06:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:46.889 23:06:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.889 23:06:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:46.889 23:06:36 -- setup/hugepages.sh@73 -- # return 0 00:03:46.889 23:06:36 -- setup/hugepages.sh@198 -- # setup output 00:03:46.889 23:06:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.889 23:06:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.191 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:50.191 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:50.191 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:50.455 23:06:39 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:50.455 23:06:39 -- setup/hugepages.sh@89 -- # local node 00:03:50.455 23:06:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.455 23:06:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.455 23:06:39 -- setup/hugepages.sh@92 -- # local surp 00:03:50.455 23:06:39 -- setup/hugepages.sh@93 -- # local resv 00:03:50.455 23:06:39 -- setup/hugepages.sh@94 -- # local anon 00:03:50.455 23:06:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.455 23:06:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.455 23:06:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.455 23:06:39 -- setup/common.sh@18 -- # local node= 00:03:50.455 23:06:39 -- setup/common.sh@19 -- # local var val 00:03:50.455 23:06:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.455 23:06:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.455 23:06:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.455 23:06:39 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.456 23:06:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.456 23:06:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106689528 kB' 'MemAvailable: 110230164 kB' 'Buffers: 4124 kB' 'Cached: 13008508 kB' 'SwapCached: 0 kB' 'Active: 10130680 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440284 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636764 kB' 'Mapped: 198640 kB' 'Shmem: 8806440 kB' 'KReclaimable: 317552 kB' 'Slab: 1132656 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 815104 kB' 'KernelStack: 27008 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10831380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234732 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 
23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
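The scan in progress here is the first verify_nr_hugepages pass of the no_shrink_alloc test. Read together, the hugepages.sh@89-@99 lines replayed above amount to: sample AnonHugePages only while transparent hugepages are not pinned to [never], then fetch the surplus and reserved counters before comparing totals. A minimal sketch of that accounting, inferred from the trace alone (nr_hugepages, here 1024, is set by the surrounding script; the upstream source may differ):

# Hedged reconstruction of the verify_nr_hugepages accounting in the trace.
verify_nr_hugepages() {
    local surp resv anon=0
    # The trace's @96 check: only read AnonHugePages when THP is not
    # pinned to [never], so THP cannot skew the arithmetic.
    [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]] &&
        anon=$(get_meminfo AnonHugePages)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    # The global pool must match what the test configured, surplus and
    # reserved pages included (cf. the hugepages.sh@110 check earlier).
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
}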
00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.456 23:06:39 -- setup/common.sh@33 -- # echo 0 00:03:50.456 23:06:39 -- setup/common.sh@33 -- # return 0 00:03:50.456 23:06:39 -- setup/hugepages.sh@97 -- # anon=0 00:03:50.456 23:06:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.456 23:06:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.456 23:06:39 -- setup/common.sh@18 -- # local node= 00:03:50.456 23:06:39 -- setup/common.sh@19 -- # local var val 00:03:50.456 23:06:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.456 23:06:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.456 23:06:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.456 23:06:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.456 23:06:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.456 23:06:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106690016 kB' 'MemAvailable: 110230652 kB' 'Buffers: 4124 kB' 'Cached: 13008512 kB' 'SwapCached: 0 kB' 'Active: 10129984 kB' 'Inactive: 3515796 kB' 'Active(anon): 9439588 kB' 'Inactive(anon): 0 kB' 
'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636476 kB' 'Mapped: 198556 kB' 'Shmem: 8806444 kB' 'KReclaimable: 317552 kB' 'Slab: 1132644 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 815092 kB' 'KernelStack: 27008 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10831388 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.456 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.456 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.457 23:06:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.457 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.457 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.457 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.457 23:06:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.457 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.457 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.457 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.457 23:06:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.457 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.457 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.457 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.457 23:06:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.457 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.457 23:06:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.457 23:06:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.457 23:06:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.457 23:06:39 -- setup/common.sh@32 -- # continue 00:03:50.457 23:06:39 -- 
setup/common.sh@31 -- # IFS=': '
00:03:50.457 23:06:39 -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: identical "@32 key test / @32 continue / @31 IFS / @31 read" iterations over the remaining /proc/meminfo keys (Shmem through HugePages_Rsvd), none matching HugePages_Surp ...]
00:03:50.457 23:06:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.457 23:06:39 -- setup/common.sh@33 -- # echo 0
00:03:50.457 23:06:39 -- setup/common.sh@33 -- # return 0
00:03:50.457 23:06:39 -- setup/hugepages.sh@99 -- # surp=0
00:03:50.457 23:06:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:50.457 23:06:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:50.457 23:06:39 -- setup/common.sh@18 -- # local node=
00:03:50.457 23:06:39 -- setup/common.sh@19 -- # local var val
00:03:50.457 23:06:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.457 23:06:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.457 23:06:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.457 23:06:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.457 23:06:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.457 23:06:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.457 23:06:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.457 23:06:39 -- setup/common.sh@31 -- # read -r var val _
00:03:50.457 23:06:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106689744 kB' 'MemAvailable: 110230380 kB' 'Buffers: 4124 kB' 'Cached: 13008520 kB' 'SwapCached: 0 kB' 'Active: 10129880 kB' 'Inactive: 3515796 kB' 'Active(anon): 9439484 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636380 kB' 'Mapped: 198556 kB' 'Shmem: 8806452 kB' 'KReclaimable: 317552 kB' 'Slab: 1132644 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 815092 kB' 'KernelStack: 26992 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10832160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234764 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
[... xtrace condensed: the @31 read / @32 key-test loop repeats for every key from MemTotal to HugePages_Free, none matching HugePages_Rsvd ...]
00:03:50.458 23:06:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.458 23:06:39 -- setup/common.sh@33 -- # echo 0
00:03:50.458 23:06:39 -- setup/common.sh@33 -- # return 0
00:03:50.458 23:06:39 -- setup/hugepages.sh@100 -- # resv=0
00:03:50.458 23:06:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:50.458 nr_hugepages=1024
00:03:50.458 23:06:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.458 resv_hugepages=0
00:03:50.458 23:06:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.458 surplus_hugepages=0
00:03:50.458 23:06:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.458 anon_hugepages=0
00:03:50.458 23:06:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.458 23:06:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
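For readers tracing along: every get_meminfo call in this log is just a field lookup over /proc/meminfo (or a per-node copy under /sys). A minimal sketch of that lookup, assuming the standard meminfo layout shown in the snapshots above — meminfo_get is a hypothetical name; the real helper in setup/common.sh additionally strips the "Node N" prefix from per-node files:

    #!/usr/bin/env bash
    # Sketch: print the value of one /proc/meminfo key, the way the traced
    # @31 read / @32 test loop does. HugePages_* counters are plain page
    # counts; most other fields carry a trailing "kB" that lands in `_`.
    meminfo_get() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    meminfo_get HugePages_Rsvd    # prints 0 on the node traced above

Once surp and resv are known, the @107 check above is plain arithmetic: the pool is consistent when HugePages_Total (1024) equals nr_hugepages + surplus + reserved, i.e. 1024 + 0 + 0.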
00:03:50.458 23:06:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.458 23:06:39 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:50.458 23:06:39 -- setup/common.sh@18 -- # local node=
00:03:50.458 23:06:39 -- setup/common.sh@19 -- # local var val
00:03:50.458 23:06:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.458 23:06:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.458 23:06:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.458 23:06:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.458 23:06:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.458 23:06:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.458 23:06:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.458 23:06:39 -- setup/common.sh@31 -- # read -r var val _
00:03:50.458 23:06:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106690628 kB' 'MemAvailable: 110231264 kB' 'Buffers: 4124 kB' 'Cached: 13008524 kB' 'SwapCached: 0 kB' 'Active: 10130400 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440004 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636960 kB' 'Mapped: 198556 kB' 'Shmem: 8806456 kB' 'KReclaimable: 317552 kB' 'Slab: 1132644 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 815092 kB' 'KernelStack: 27040 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10831420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
[... xtrace condensed: the key scan repeats from MemTotal onward until HugePages_Total matches ...]
00:03:50.721 23:06:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.721 23:06:39 -- setup/common.sh@33 -- # echo 1024
00:03:50.721 23:06:39 -- setup/common.sh@33 -- # return 0
00:03:50.721 23:06:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.721 23:06:39 -- setup/hugepages.sh@112 -- # get_nodes
00:03:50.721 23:06:39 -- setup/hugepages.sh@27 -- # local node
00:03:50.721 23:06:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.721 23:06:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:50.721 23:06:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.721 23:06:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:50.721 23:06:39 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:50.721 23:06:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:50.721 23:06:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.721 23:06:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
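The get_nodes walk above just globs /sys/devices/system/node/node*/ (the traced script uses the extglob form node+([0-9])); here it finds two nodes, with all 1024 pages on node0 and none on node1. A short sketch of the same per-node tally, assuming that standard sysfs layout:

    #!/usr/bin/env bash
    # Sketch: per-NUMA-node HugePages_Total, read from the same per-node
    # meminfo files the trace consults below; per-node lines look like
    # "Node 0 HugePages_Total: 1024", so the count is the last field.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        pages=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}=${pages}"
    done

On this machine that prints node0=1024 and node1=0, matching the "node0=1024 expecting 1024" line further down.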
00:03:50.722 23:06:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:50.722 23:06:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.722 23:06:39 -- setup/common.sh@18 -- # local node=0
00:03:50.722 23:06:39 -- setup/common.sh@19 -- # local var val
00:03:50.722 23:06:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.722 23:06:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.722 23:06:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:50.722 23:06:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:50.722 23:06:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.722 23:06:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.722 23:06:39 -- setup/common.sh@31 -- # IFS=': '
00:03:50.722 23:06:39 -- setup/common.sh@31 -- # read -r var val _
00:03:50.722 23:06:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58044888 kB' 'MemUsed: 7614120 kB' 'SwapCached: 0 kB' 'Active: 3356340 kB' 'Inactive: 108980 kB' 'Active(anon): 3046820 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3368244 kB' 'Mapped: 98228 kB' 'AnonPages: 100388 kB' 'Shmem: 2949744 kB' 'KernelStack: 13304 kB' 'PageTables: 3216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164676 kB' 'Slab: 557392 kB' 'SReclaimable: 164676 kB' 'SUnreclaim: 392716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace condensed: the key scan repeats over the node0 meminfo keys until HugePages_Surp matches ...]
00:03:50.723 23:06:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.723 23:06:39 -- setup/common.sh@33 -- # echo 0
00:03:50.723 23:06:39 -- setup/common.sh@33 -- # return 0
00:03:50.723 23:06:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:50.723 23:06:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.723 23:06:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.723 23:06:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.723 23:06:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:50.723 node0=1024 expecting 1024
00:03:50.723 23:06:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:50.723 23:06:39 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:50.723 23:06:39 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:50.723 23:06:39 -- setup/hugepages.sh@202 -- # setup output
00:03:50.723 23:06:39 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.723 23:06:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:54.022 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:54.022 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:54.022 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
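setup.sh, invoked just above with NRHUGE=512 and CLEAR_HUGE=no, first claims the test devices (the vfio-pci lines) and then sizes the hugepage pool; the INFO line below shows it leaving the existing 1024-page pool in place rather than shrinking it to 512. A sketch of that grow-only decision, assuming the standard per-node sysfs knob — this is not setup.sh's literal code:

    #!/usr/bin/env bash
    # Sketch: only grow the 2 MiB hugepage pool on node0, never shrink it.
    # Writing the nr_hugepages knob requires root.
    want=512
    knob=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    have=$(cat "$knob")
    if (( have >= want )); then
        echo "INFO: Requested $want hugepages but $have already allocated on node0"
    else
        echo "$want" > "$knob"
    fi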
driver 00:03:54.283 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:54.283 23:06:43 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:54.283 23:06:43 -- setup/hugepages.sh@89 -- # local node 00:03:54.283 23:06:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.283 23:06:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.283 23:06:43 -- setup/hugepages.sh@92 -- # local surp 00:03:54.283 23:06:43 -- setup/hugepages.sh@93 -- # local resv 00:03:54.283 23:06:43 -- setup/hugepages.sh@94 -- # local anon 00:03:54.283 23:06:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.283 23:06:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.283 23:06:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.283 23:06:43 -- setup/common.sh@18 -- # local node= 00:03:54.283 23:06:43 -- setup/common.sh@19 -- # local var val 00:03:54.283 23:06:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.283 23:06:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.283 23:06:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.283 23:06:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.283 23:06:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.283 23:06:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.283 23:06:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.283 23:06:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.283 23:06:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106729684 kB' 'MemAvailable: 110270320 kB' 'Buffers: 4124 kB' 'Cached: 13008628 kB' 'SwapCached: 0 kB' 'Active: 10136880 kB' 'Inactive: 3515796 kB' 'Active(anon): 9446484 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643168 kB' 'Mapped: 199092 kB' 'Shmem: 8806560 kB' 'KReclaimable: 317552 kB' 'Slab: 1132520 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 814968 kB' 'KernelStack: 27168 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10841276 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234940 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB' 00:03:54.284 23:06:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.284 23:06:43 -- setup/common.sh@32 -- # continue 00:03:54.284 23:06:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.284 23:06:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.284 23:06:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.284 23:06:43 -- setup/common.sh@32 -- # continue 00:03:54.284 23:06:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.284 23:06:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.284 23:06:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.284 23:06:43 -- setup/common.sh@32 -- # continue 00:03:54.284 23:06:43 
-- setup/common.sh@31 -- # IFS=': '
00:03:54.284 23:06:43 -- setup/common.sh@31 -- # read -r var val _
00:03:54.284 23:06:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:54.284 23:06:43 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for every other /proc/meminfo key, Cached through HardwareCorrupted ...]
00:03:54.284 23:06:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:54.284 23:06:43 -- setup/common.sh@33 -- # echo 0
00:03:54.284 23:06:43 -- setup/common.sh@33 -- # return 0
00:03:54.284 23:06:43 -- setup/hugepages.sh@97 -- # anon=0
00:03:54.284 23:06:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:54.285 23:06:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.285 23:06:43 -- setup/common.sh@18 -- # local node=
00:03:54.285 23:06:43 -- setup/common.sh@19 -- # local var val
00:03:54.285 23:06:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.285 23:06:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.285 23:06:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.285 23:06:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.285 23:06:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.285 23:06:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.285 23:06:43 -- setup/common.sh@31 -- # IFS=': '
00:03:54.285 23:06:43 -- setup/common.sh@31 -- # read -r var val _
00:03:54.285 23:06:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106732096 kB' 'MemAvailable: 110272732 kB' 'Buffers: 4124 kB' 'Cached: 13008632 kB' 'SwapCached: 0 kB' 'Active: 10130828 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440432 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637144 kB' 'Mapped: 198600 kB' 'Shmem: 8806564 kB' 'KReclaimable: 317552 kB' 'Slab: 1132540 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 814988 kB' 'KernelStack: 26912 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10835164 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234892 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
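The wall of [[ ... ]] / continue entries above is get_meminfo walking that snapshot one "key: value" pair at a time: each line is split with IFS=': ', the key is compared against the requested field, and the value is echoed on the first match. A minimal standalone sketch of that pattern, assuming only standard bash (get_meminfo_value is a hypothetical name, not the SPDK helper itself):

```bash
# Minimal sketch of the parser traced above: split each /proc/meminfo line on
# IFS=': ', skip non-matching keys with continue, echo the first match.
# get_meminfo_value is a hypothetical name, not the SPDK helper itself.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # mirrors the [[ key == ... ]] trace
        echo "$val"                        # the trailing "kB" unit lands in _
        return 0
    done < /proc/meminfo
    return 1                               # requested key not present
}

get_meminfo_value AnonHugePages            # prints 0 on this box, per the log
```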
[... xtrace scans the snapshot with the same IFS=': ' / read / [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle for each key until HugePages_Surp matches ...]
00:03:54.553 23:06:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.553 23:06:43 -- setup/common.sh@33 -- # echo 0
00:03:54.553 23:06:43 -- setup/common.sh@33 -- # return 0
00:03:54.553 23:06:43 -- setup/hugepages.sh@99 -- # surp=0
00:03:54.553 23:06:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:54.553 23:06:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:54.553 23:06:43 -- setup/common.sh@18 -- # local node=
00:03:54.553 23:06:43 -- setup/common.sh@19 -- # local var val
00:03:54.553 23:06:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.553 23:06:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.553 23:06:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.553 23:06:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.553 23:06:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.553 23:06:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.553 23:06:43 -- setup/common.sh@31 -- # IFS=': '
00:03:54.553 23:06:43 -- setup/common.sh@31 -- # read -r var val _
00:03:54.553 23:06:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106731788 kB' 'MemAvailable: 110272424 kB' 'Buffers: 4124 kB' 'Cached: 13008644 kB' 'SwapCached: 0 kB' 'Active: 10130644 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440248 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637376 kB' 'Mapped: 198608 kB' 'Shmem: 8806576 kB' 'KReclaimable: 317552 kB' 'Slab: 1132188 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 814636 kB' 'KernelStack: 26960 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10835180 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234876 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
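The suite makes one full pass over the snapshot per field, so HugePages_Surp, HugePages_Rsvd, and HugePages_Total each trigger a fresh scan like the one above. Purely as an illustration (the scripts deliberately reuse their get_meminfo helper instead of a one-off like this), the same counters could be pulled in a single pass:

```bash
# Illustration only: fetch the three hugepage counters in one pass instead of
# one scan per field. Output values match the snapshot printed above.
awk -F': +' '/^HugePages_(Total|Rsvd|Surp)/ { print $1 "=" $2 }' /proc/meminfo
# HugePages_Total=1024
# HugePages_Rsvd=0
# HugePages_Surp=0
```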
[... xtrace scans the snapshot again, continuing past every key that is not HugePages_Rsvd ...]
00:03:54.554 23:06:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:54.554 23:06:43 -- setup/common.sh@33 -- # echo 0
00:03:54.554 23:06:43 -- setup/common.sh@33 -- # return 0
00:03:54.554 23:06:43 -- setup/hugepages.sh@100 -- # resv=0
00:03:54.554 23:06:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:54.554 nr_hugepages=1024
00:03:54.554 23:06:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:54.554 resv_hugepages=0
00:03:54.554 23:06:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:54.554 surplus_hugepages=0
00:03:54.554 23:06:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:54.554 anon_hugepages=0
00:03:54.554 23:06:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:54.554 23:06:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:54.554 23:06:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:54.554 23:06:43 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:54.554 23:06:43 -- setup/common.sh@18 -- # local node=
00:03:54.554 23:06:43 -- setup/common.sh@19 -- # local var val
00:03:54.554 23:06:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.554 23:06:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.554 23:06:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.554 23:06:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.554 23:06:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.554 23:06:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.554 23:06:43 -- setup/common.sh@31 -- # IFS=': '
00:03:54.554 23:06:43 -- setup/common.sh@31 -- # read -r var val _
00:03:54.554 23:06:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 106733612 kB' 'MemAvailable: 110274248 kB' 'Buffers: 4124 kB' 'Cached: 13008660 kB' 'SwapCached: 0 kB' 'Active: 10131240 kB' 'Inactive: 3515796 kB' 'Active(anon): 9440844 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637504 kB' 'Mapped: 198608 kB' 'Shmem: 8806592 kB' 'KReclaimable: 317552 kB' 'Slab: 1132188 kB' 'SReclaimable: 317552 kB' 'SUnreclaim: 814636 kB' 'KernelStack: 26976 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 10835196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234844 kB' 'VmallocChunk: 0 kB' 'Percpu: 108288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3722612 kB' 'DirectMap2M: 43143168 kB' 'DirectMap1G: 89128960 kB'
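The checks traced at setup/hugepages.sh@107 and @110 are a simple accounting identity: the 1024 pages the test configured must equal what the kernel reports, with no surplus or reserved pages outstanding (surp=0, resv=0, anon=0 in this run). A sketch of that verification, reusing the hypothetical get_meminfo_value helper from the earlier sketch:

```bash
# Sketch of the accounting check, assuming the get_meminfo_value helper
# sketched earlier; the literal 1024 mirrors the count echoed by the script.
nr_hugepages=1024                            # the count the test configured
surp=$(get_meminfo_value HugePages_Surp)     # 0 in this run
resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo_value HugePages_Total)   # 1024 in this run
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"
else
    echo "unexpected surplus/reserved hugepages" >&2
    exit 1
fi
```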
[... xtrace scans the snapshot once more, continuing past every key that is not HugePages_Total ...]
00:03:54.555 23:06:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:54.555 23:06:43 -- setup/common.sh@33 -- # echo 1024
00:03:54.555 23:06:43 -- setup/common.sh@33 -- # return 0
00:03:54.555 23:06:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:54.555 23:06:43 -- setup/hugepages.sh@112 -- # get_nodes
00:03:54.555 23:06:43 -- setup/hugepages.sh@27 -- # local node
00:03:54.556 23:06:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:54.556 23:06:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:54.556 23:06:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:54.556 23:06:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:54.556 23:06:43 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:54.556 23:06:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:54.556 23:06:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:54.556 23:06:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:54.556 23:06:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:54.556 23:06:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.556 23:06:43 -- setup/common.sh@18 -- # local node=0
00:03:54.556 23:06:43 -- setup/common.sh@19 -- # local var val
00:03:54.556 23:06:43 -- setup/common.sh@20 -- # local mem_f mem
00:03:54.556 23:06:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.556 23:06:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:54.556 23:06:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:54.556 23:06:43 -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.556 23:06:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.556 23:06:43 -- setup/common.sh@31 -- # IFS=': '
00:03:54.556 23:06:43 -- setup/common.sh@31 -- # read -r var val _
00:03:54.556 23:06:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58051068 kB' 'MemUsed: 7607940 kB' 'SwapCached: 0 kB' 'Active: 3357124 kB' 'Inactive: 108980 kB' 'Active(anon): 3047604 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 108980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3368348 kB' 'Mapped: 98240 kB' 'AnonPages: 100876 kB' 'Shmem: 2949848 kB' 'KernelStack: 13336 kB' 'PageTables: 3276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164676 kB' 'Slab: 556928 kB' 'SReclaimable: 164676 kB' 'SUnreclaim: 392252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
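When get_meminfo is called with a node argument (HugePages_Surp 0 above), it switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, where every line carries a "Node 0 " prefix that the ${mem[@]#Node +([0-9]) } expansion strips, hence the extglob pattern in the trace. A per-node sketch under the same assumptions (node_meminfo_value is a hypothetical name):

```bash
# Per-node variant of the same parser. Lines in the sysfs file look like
# "Node 0 HugePages_Surp: 0", so the "Node <n> " prefix is stripped first;
# the +([0-9]) pattern needs extglob, exactly as in the traced expansion.
shopt -s extglob
node_meminfo_value() {                     # hypothetical name
    local node=$1 get=$2 line var val _
    local -a mem
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")       # drop the per-node prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

node_meminfo_value 0 HugePages_Surp        # prints 0 for node0 in this run
```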
continue
00:03:54.556 23:06:43 -- setup/common.sh@31 -- # IFS=': '
00:03:54.556 23:06:43 -- setup/common.sh@31 -- # read -r var val _
00:03:54.556 [trace condensed: setup/common.sh@32 tests each remaining /proc/meminfo key -- SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free -- against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; none match, so each iteration hits "continue" and loops back through the IFS=': ' read]
00:03:54.557 23:06:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.557 23:06:43 -- setup/common.sh@33 -- # echo 0
00:03:54.557 23:06:43 -- setup/common.sh@33 -- # return 0
00:03:54.557 23:06:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:54.557 23:06:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.557 23:06:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.557 23:06:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.557 23:06:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:54.557 23:06:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:54.557
00:03:54.557 real 0m7.581s
00:03:54.557 user 0m3.039s
00:03:54.557 sys 0m4.633s
00:03:54.557 23:06:43 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:54.557 23:06:43 -- common/autotest_common.sh@10 -- # set +x
00:03:54.557 ************************************
00:03:54.557 END TEST no_shrink_alloc
00:03:54.557 ************************************
00:03:54.557 23:06:43 -- setup/hugepages.sh@217 -- # clear_hp
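The scan above is the whole trick behind these meminfo reads. A minimal sketch of the pattern, reconstructed from the xtrace output rather than copied from setup/common.sh:

  # Scan a meminfo-style stream and print the value of one key, 0 if absent.
  get_meminfo_field() {
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue   # skips SwapCached, Active, ... as traced
          echo "${val:-0}"
          return 0
      done < /proc/meminfo
      echo 0
  }
  get_meminfo_field HugePages_Surp   # prints 0 on this node, matching the trace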
00:03:54.557 23:06:43 -- setup/hugepages.sh@37 -- # local node hp
00:03:54.557 [trace condensed: setup/hugepages.sh@39-41 loops over both NUMA nodes and every /sys/devices/system/node/node$node/hugepages/hugepages-* size directory, echoing 0 into each -- two sizes per node, four "echo 0" writes in all]
00:03:54.557 23:06:43 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:54.557 23:06:43 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:54.557
00:03:54.557 real 0m28.205s
00:03:54.557 user 0m11.085s
00:03:54.557 sys 0m17.306s
00:03:54.557 23:06:43 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:54.557 23:06:43 -- common/autotest_common.sh@10 -- # set +x
00:03:54.557 ************************************
00:03:54.557 END TEST hugepages
00:03:54.557 ************************************
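clear_hp, just traced, resets every per-node hugepage pool before the next suite. A hedged sketch of that loop; the nr_hugepages attribute is the standard kernel name and is assumed here, since the xtrace only shows the bare echo 0:

  # Zero every hugepage size on every NUMA node via sysfs.
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"   # assumed target; the redirection is not echoed
      done
  done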
00:03:54.557 23:06:43 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:54.557 23:06:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:54.557 23:06:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:54.557 23:06:43 -- common/autotest_common.sh@10 -- # set +x
00:03:54.818 ************************************
00:03:54.818 START TEST driver
00:03:54.818 ************************************
00:03:54.818 23:06:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:54.818 * Looking for test storage...
00:03:54.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:54.818 23:06:43 -- setup/driver.sh@68 -- # setup reset
00:03:54.818 23:06:43 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:54.818 23:06:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:00.106 23:06:48 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:00.106 23:06:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:00.106 23:06:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:00.106 23:06:48 -- common/autotest_common.sh@10 -- # set +x
00:04:00.106 ************************************
00:04:00.106 START TEST guess_driver
00:04:00.106 ************************************
00:04:00.106 23:06:48 -- common/autotest_common.sh@1111 -- # guess_driver
00:04:00.106 23:06:48 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:00.106 23:06:48 -- setup/driver.sh@47 -- # local fail=0
00:04:00.106 23:06:48 -- setup/driver.sh@49 -- # pick_driver
00:04:00.106 23:06:48 -- setup/driver.sh@36 -- # vfio
00:04:00.106 23:06:48 -- setup/driver.sh@21 -- # local iommu_groups
00:04:00.106 23:06:48 -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:00.106 23:06:48 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:00.106 23:06:48 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:00.106 23:06:48 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:00.106 23:06:48 -- setup/driver.sh@29 -- # (( 322 > 0 ))
00:04:00.106 23:06:48 -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:00.106 23:06:48 -- setup/driver.sh@14 -- # mod vfio_pci
00:04:00.106 23:06:48 -- setup/driver.sh@12 -- # dep vfio_pci
00:04:00.106 23:06:48 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:00.106 23:06:48 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:00.106 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:00.106 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:00.106 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:00.106 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:00.106 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:00.106 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:00.106 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:00.106 23:06:48 -- setup/driver.sh@30 -- # return 0
00:04:00.106 23:06:48 -- setup/driver.sh@37 -- # echo vfio-pci
00:04:00.106 23:06:48 -- setup/driver.sh@49 -- # driver=vfio-pci
00:04:00.106 23:06:48 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:00.106 23:06:48 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
Looking for driver=vfio-pci
00:04:00.106 23:06:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:00.106 23:06:48 -- setup/driver.sh@45 -- # setup output config
00:04:00.106 23:06:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:00.106 23:06:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:02.651 [trace condensed: setup/driver.sh@58/@61/@57 repeats for every device line of the config output -- each line's marker is "->" and each driver is vfio-pci, so [[ -> == \-\> ]] and [[ vfio-pci == vfio-pci ]] both pass on every iteration and fail stays 0]
00:04:03.220 23:06:52 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:03.220 23:06:52 -- setup/driver.sh@65 -- # setup reset
00:04:03.220 23:06:52 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:03.220 23:06:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:08.516
00:04:08.516 real 0m7.944s
00:04:08.516 user 0m2.380s
00:04:08.516 sys 0m4.610s
00:04:08.516 23:06:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:08.516 23:06:56 -- common/autotest_common.sh@10 -- # set +x
00:04:08.516 ************************************
00:04:08.516 END TEST guess_driver
00:04:08.516 ************************************
00:04:08.517
00:04:08.517 real 0m12.906s
00:04:08.517 user 0m3.759s
00:04:08.517 sys 0m7.371s
00:04:08.517 23:06:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:08.517 23:06:56 -- common/autotest_common.sh@10 -- # set +x
00:04:08.517 ************************************
00:04:08.517 END TEST driver
00:04:08.517 ************************************
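guess_driver passed because pick_driver settled on vfio-pci. A rough sketch of the decision visible in the trace (not the literal driver.sh code; the uio_pci_generic fallback is an assumption this run never exercises):

  pick_driver() {
      local unsafe_vfio=N groups
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 | wc -l)
      # this box reported 322 IOMMU groups above, so vfio-pci is viable
      if (( groups > 0 )) || [[ $unsafe_vfio == Y ]]; then
          modprobe --show-depends vfio_pci &> /dev/null && { echo vfio-pci; return 0; }
      fi
      echo uio_pci_generic   # assumed fallback; not taken in this log
  }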
00:04:08.517 23:06:56 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:08.517 23:06:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:08.517 23:06:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:08.517 23:06:56 -- common/autotest_common.sh@10 -- # set +x
00:04:08.517 ************************************
00:04:08.517 START TEST devices
00:04:08.517 ************************************
00:04:08.517 23:06:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:08.517 * Looking for test storage...
00:04:08.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:08.517 23:06:57 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:08.517 23:06:57 -- setup/devices.sh@192 -- # setup reset
00:04:08.517 23:06:57 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:08.517 23:06:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:12.718 23:07:01 -- setup/devices.sh@194 -- # get_zoned_devs
00:04:12.718 [trace condensed: common/autotest_common.sh@1655-@1651 declares zoned_devs, walks /sys/block/nvme*, and finds /sys/block/nvme0n1/queue/zoned reports "none" -- no zoned namespaces to exclude]
00:04:12.718 23:07:01 -- setup/devices.sh@196 -- # blocks=()
00:04:12.718 23:07:01 -- setup/devices.sh@196 -- # declare -a blocks
00:04:12.718 23:07:01 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:12.718 23:07:01 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:12.718 23:07:01 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:12.718 23:07:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:12.718 23:07:01 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:12.718 23:07:01 -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:12.718 23:07:01 -- setup/devices.sh@202 -- # pci=0000:65:00.0
00:04:12.718 23:07:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]]
00:04:12.718 23:07:01 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:12.718 23:07:01 -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:12.718 23:07:01 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:04:12.718 23:07:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:12.718 23:07:01 -- scripts/common.sh@391 -- # pt=
00:04:12.718 23:07:01 -- scripts/common.sh@392 -- # return 1
00:04:12.718 23:07:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:12.718 23:07:01 -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:12.718 23:07:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:12.718 23:07:01 -- setup/common.sh@80 -- # echo 1920383410176
00:04:12.718 23:07:01 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size ))
00:04:12.718 23:07:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:12.718 23:07:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0
00:04:12.718 23:07:01 -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:12.718 23:07:01 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:12.718 23:07:01 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:12.718 23:07:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:12.718 23:07:01 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:12.718 23:07:01 -- common/autotest_common.sh@10 -- # set +x
00:04:12.718 ************************************
00:04:12.718 START TEST nvme_mount
00:04:12.718 ************************************
00:04:12.718 23:07:01 -- common/autotest_common.sh@1111 -- # nvme_mount
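Before nvme_mount begins, devices.sh has elected nvme0n1 as the test disk; the checks traced above boil down to three conditions. A sketch with a function name of my own choosing:

  min_disk_size=3221225472   # 3 GiB, as set at setup/devices.sh@198
  block_qualifies() {
      local dev=$1 zoned=none
      [[ -e /sys/block/$dev/queue/zoned ]] && zoned=$(cat "/sys/block/$dev/queue/zoned")
      [[ $zoned == none ]] || return 1                              # not a zoned namespace
      [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || return 1  # no partition table
      (( $(cat "/sys/block/$dev/size") * 512 >= min_disk_size ))    # 1920383410176 bytes here
  }
  block_qualifies nvme0n1 && echo "usable test disk"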
00:04:12.718 23:07:01 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:12.718 23:07:01 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:12.718 23:07:01 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:12.718 23:07:01 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:12.718 23:07:01 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:12.718 23:07:01 -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:12.718 23:07:01 -- setup/common.sh@40 -- # local part_no=1
00:04:12.718 23:07:01 -- setup/common.sh@41 -- # local size=1073741824
00:04:12.718 23:07:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:12.718 23:07:01 -- setup/common.sh@44 -- # parts=()
00:04:12.718 23:07:01 -- setup/common.sh@44 -- # local parts
00:04:12.718 [trace condensed: setup/common.sh@46-47 runs the single loop pass, appending nvme0n1p1 to parts]
00:04:12.718 23:07:01 -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:12.718 23:07:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:12.718 23:07:01 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:13.288 Creating new GPT entries in memory.
00:04:13.288 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:13.288 other utilities.
00:04:13.288 23:07:02 -- setup/common.sh@57 -- # (( part = 1 ))
00:04:13.288 23:07:02 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:13.288 23:07:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:13.288 23:07:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:13.288 23:07:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:14.243 Creating new GPT entries in memory.
00:04:14.243 The operation has completed successfully.
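The zap-and-repartition step reduces to two sgdisk calls, and the odd-looking sector numbers fall straight out of the size arithmetic. A condensed sketch:

  disk=/dev/nvme0n1
  size=1073741824            # 1 GiB requested for the partition
  (( size /= 512 ))          # 2097152 sectors
  sgdisk "$disk" --zap-all
  # first partition: sectors 2048 .. 2048+2097152-1 = 2099199, as in the trace
  flock "$disk" sgdisk "$disk" --new=1:2048:$(( 2048 + size - 1 ))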
00:04:14.243 23:07:03 -- setup/common.sh@57 -- # (( part++ ))
00:04:14.243 23:07:03 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:14.243 23:07:03 -- setup/common.sh@62 -- # wait 3699945
00:04:14.243 23:07:03 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:14.243 23:07:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:14.243 23:07:03 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:14.243 23:07:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:14.243 23:07:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:14.243 23:07:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:14.243 23:07:03 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:14.243 [trace condensed: setup/devices.sh@48-@56 sets dev=0000:65:00.0, mounts=nvme0n1:nvme0n1p1, mount_point/test_file to the nvme_mount paths and found=0; the ':' at @56 is where the test file gets created (its redirection is not echoed)]
00:04:14.243 23:07:03 -- setup/devices.sh@59 -- # local pci status
00:04:14.243 23:07:03 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:14.243 23:07:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:14.243 23:07:03 -- setup/devices.sh@47 -- # setup output config
00:04:14.243 23:07:03 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:14.243 23:07:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:17.549 [trace condensed: setup/devices.sh@62/@60 skips every 0000:80:01.x I/OAT BDF -- none equals \0\0\0\0\:\6\5\:\0\0\.\0]
00:04:17.549 23:07:06 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:17.549 23:07:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:17.549 23:07:06 -- setup/devices.sh@63 -- # found=1
00:04:17.549 [trace condensed: the remaining 0000:00:01.x I/OAT BDFs are skipped the same way]
00:04:18.122 23:07:07 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:18.122 23:07:07 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:18.122 23:07:07 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:18.122 23:07:07 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:18.122 23:07:07 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:18.122 23:07:07 -- setup/devices.sh@110 -- # cleanup_nvme
00:04:18.122 23:07:07 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:18.122 23:07:07 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:18.122 23:07:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:18.122 23:07:07 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:18.122 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:18.122 23:07:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:18.122 23:07:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:18.384 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:18.384 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:04:18.385 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:18.385 /dev/nvme0n1: calling ioctl to re-read partition table: Success
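The verify step above greps the allowed BDF out of a setup.sh config listing. A sketch of that loop with variable names taken from the trace; $rootdir stands in for the long workspace path:

  target=0000:65:00.0 expected=nvme0n1:nvme0n1p1 found=0
  while read -r pci _ _ status; do
      [[ $pci == "$target" ]] || continue          # skips all the I/OAT BDFs
      [[ $status == *"Active devices: "*"$expected"* ]] && found=1
  done < <(PCI_ALLOWED=$target "$rootdir/scripts/setup.sh" config)
  (( found == 1 ))   # the test passes only if the mount was reported active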
00:04:18.385 23:07:07 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:18.385 23:07:07 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:18.385 23:07:07 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:18.385 23:07:07 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:18.385 23:07:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:18.385 23:07:07 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:18.385 23:07:07 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:18.385 [trace condensed: setup/devices.sh@48-@56 sets dev=0000:65:00.0, mounts=nvme0n1:nvme0n1, the same mount point and test file, and found=0]
00:04:18.385 23:07:07 -- setup/devices.sh@59 -- # local pci status
00:04:18.385 23:07:07 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:18.385 23:07:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:18.385 23:07:07 -- setup/devices.sh@47 -- # setup output config
00:04:18.385 23:07:07 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:18.385 23:07:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:21.773 [trace condensed: the 0000:80:01.x I/OAT BDFs are skipped]
00:04:21.773 23:07:10 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:21.773 23:07:10 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:21.773 23:07:10 -- setup/devices.sh@63 -- # found=1
00:04:21.773 [trace condensed: the 0000:00:01.x I/OAT BDFs are skipped]
00:04:22.034 23:07:11 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:22.034 23:07:11 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:22.034 23:07:11 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:22.034 23:07:11 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:22.034 23:07:11 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:22.034 23:07:11 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:22.034 23:07:11 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' ''
00:04:22.034 [trace condensed: setup/devices.sh@48-@55 sets dev=0000:65:00.0, mounts=data@nvme0n1, empty mount point and test file, and found=0]
00:04:22.034 23:07:11 -- setup/devices.sh@59 -- # local pci status
00:04:22.034 23:07:11 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:22.034 23:07:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:22.034 23:07:11 -- setup/devices.sh@47 -- # setup output config
00:04:22.034 23:07:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:22.034 23:07:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:25.329 [trace condensed: the 0000:80:01.x I/OAT BDFs are skipped]
00:04:25.329 23:07:14 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:25.329 23:07:14 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:25.329 23:07:14 -- setup/devices.sh@63 -- # found=1
00:04:25.329 [trace condensed: the 0000:00:01.x I/OAT BDFs are skipped]
00:04:25.590 23:07:14 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:25.590 23:07:14 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:25.590 23:07:14 -- setup/devices.sh@68 -- # return 0
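Both verify passes sit on the same mkfs-and-mount scaffolding; the whole-disk variant just traced looks roughly like this ($mnt abbreviates the CI mount path):

  dev=/dev/nvme0n1
  mnt=$WORKSPACE/spdk/test/setup/nvme_mount   # placeholder for the long Jenkins path
  mkdir -p "$mnt"
  mkfs.ext4 -qF "$dev" 1024M                  # cap the filesystem at 1024M, as traced
  mount "$dev" "$mnt"
  touch "$mnt/test_nvme"                      # the dummy file verify later checks for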
00:04:25.590 23:07:14 -- setup/devices.sh@128 -- # cleanup_nvme
00:04:25.590 23:07:14 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:25.851 23:07:14 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:25.851 23:07:14 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:25.851 23:07:14 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:25.851 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:25.851
00:04:25.851 real 0m13.487s
00:04:25.851 user 0m4.232s
00:04:25.851 sys 0m7.095s
00:04:25.851 23:07:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:25.851 23:07:14 -- common/autotest_common.sh@10 -- # set +x
00:04:25.851 ************************************
00:04:25.851 END TEST nvme_mount
00:04:25.851 ************************************
00:04:25.851 23:07:14 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:25.851 23:07:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:25.851 23:07:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:25.851 23:07:14 -- common/autotest_common.sh@10 -- # set +x
00:04:25.851 ************************************
00:04:25.851 START TEST dm_mount
00:04:25.851 ************************************
00:04:25.851 23:07:15 -- common/autotest_common.sh@1111 -- # dm_mount
00:04:25.851 23:07:15 -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:25.852 23:07:15 -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:25.852 23:07:15 -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:25.852 23:07:15 -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:25.852 23:07:15 -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:25.852 23:07:15 -- setup/common.sh@40 -- # local part_no=2
00:04:25.852 23:07:15 -- setup/common.sh@41 -- # local size=1073741824
00:04:25.852 23:07:15 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:25.852 23:07:15 -- setup/common.sh@44 -- # parts=()
00:04:25.852 23:07:15 -- setup/common.sh@44 -- # local parts
00:04:25.852 [trace condensed: setup/common.sh@46-47 runs two loop passes, appending nvme0n1p1 and nvme0n1p2 to parts]
00:04:25.852 23:07:15 -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:25.852 23:07:15 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:25.852 23:07:15 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:27.234 Creating new GPT entries in memory.
00:04:27.234 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:27.234 other utilities.
00:04:27.234 [trace condensed: setup/common.sh@57-59 computes the first partition's bounds]
00:04:27.234 23:07:16 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:28.174 Creating new GPT entries in memory.
00:04:28.174 The operation has completed successfully.
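dm_mount now repeats the partitioning dance with part_no=2. The bookkeeping behind the two sgdisk calls, as a sketch:

  disk=nvme0n1 part_no=2 size=1073741824
  parts=()
  for (( part = 1; part <= part_no; part++ )); do
      parts+=("${disk}p$part")   # -> nvme0n1p1 nvme0n1p2
  done
  (( size /= 512 ))              # 2097152 sectors each: p1 ends at 2099199,
                                 # p2 spans 2099200..4196351 (created next)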
00:04:28.174 23:07:17 -- setup/common.sh@57 -- # (( part++ ))
00:04:28.174 23:07:17 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:28.174 23:07:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:28.174 23:07:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:28.174 23:07:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:29.114 The operation has completed successfully.
00:04:29.114 23:07:18 -- setup/common.sh@57 -- # (( part++ ))
00:04:29.114 23:07:18 -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:29.114 23:07:18 -- setup/common.sh@62 -- # wait 3705213
00:04:29.114 23:07:18 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:29.114 23:07:18 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:29.114 23:07:18 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:29.114 23:07:18 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:29.114 23:07:18 -- setup/devices.sh@160 -- # for t in {1..5}
00:04:29.114 23:07:18 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:29.114 23:07:18 -- setup/devices.sh@161 -- # break
00:04:29.114 23:07:18 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:29.114 23:07:18 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:29.114 23:07:18 -- setup/devices.sh@165 -- # dm=/dev/dm-1
00:04:29.114 23:07:18 -- setup/devices.sh@166 -- # dm=dm-1
00:04:29.114 23:07:18 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]]
00:04:29.114 23:07:18 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]]
00:04:29.114 23:07:18 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:29.114 23:07:18 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:04:29.114 23:07:18 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:29.114 23:07:18 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:29.114 23:07:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:29.114 23:07:18 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:29.114 23:07:18 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:29.114 [trace condensed: setup/devices.sh@48-@56 sets dev=0000:65:00.0, mounts=nvme0n1:nvme_dm_test, the dm_mount mount point and test_dm test file, and found=0, with the ':' at @56 creating test_dm (redirection not echoed)]
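The dm device is then assembled and checked through /dev/mapper and the partitions' holders directories. A sketch of that sequence; the linear table is an assumption, since the trace never echoes what dmsetup create read on stdin:

  # Build nvme_dm_test from the two 1 GiB partitions (table assumed, not traced).
  printf '%s\n' '0 2097152 linear /dev/nvme0n1p1 0' \
                '2097152 2097152 linear /dev/nvme0n1p2 0' |
      dmsetup create nvme_dm_test
  dm=$(readlink -f /dev/mapper/nvme_dm_test)       # /dev/dm-1 in this run
  dm=${dm##*/}
  [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]  # both partitions must
  [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]  # list dm-1 as a holder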
00:04:29.114 23:07:18 -- setup/devices.sh@59 -- # local pci status
00:04:29.114 23:07:18 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:29.114 23:07:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:29.114 23:07:18 -- setup/devices.sh@47 -- # setup output config
00:04:29.114 23:07:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:29.114 23:07:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:32.415 [trace condensed: the 0000:80:01.x I/OAT BDFs are skipped]
00:04:32.415 23:07:21 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:32.415 23:07:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:32.415 23:07:21 -- setup/devices.sh@63 -- # found=1
00:04:32.415 [trace condensed: the 0000:00:01.x I/OAT BDFs are skipped]
00:04:32.415 23:07:21 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:32.415 23:07:21 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:04:32.415 23:07:21 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:32.415 23:07:21 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:32.415 23:07:21 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:32.415 23:07:21 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:32.415 23:07:21 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' ''
00:04:32.415 [trace condensed: setup/devices.sh@48-@55 sets dev=0000:65:00.0, mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, empty mount point and test file, and found=0]
00:04:32.415 23:07:21 -- setup/devices.sh@59 -- # local pci status
00:04:32.415 23:07:21 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:32.415 23:07:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:32.415 23:07:21 -- setup/devices.sh@47 -- # setup output config
00:04:32.415 23:07:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:32.415 23:07:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:34.957 [trace condensed: the 0000:80:01.x I/OAT BDFs are skipped]
00:04:34.957 23:07:24 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:34.957 23:07:24 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]]
00:04:34.957 23:07:24 -- setup/devices.sh@63 -- # found=1
00:04:34.957 [trace condensed: the 0000:00:01.x I/OAT BDFs are skipped]
00:04:35.526 23:07:24 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:35.526 23:07:24 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:35.526 23:07:24 -- setup/devices.sh@68 -- # return 0
00:04:35.526 23:07:24 -- setup/devices.sh@187 -- # cleanup_dm
00:04:35.526 23:07:24 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:35.526 23:07:24 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:35.526 23:07:24 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:35.526 23:07:24 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:35.526 23:07:24 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:35.527 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:35.527 23:07:24 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:35.527 23:07:24 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:35.527
00:04:35.527 real 0m9.591s
00:04:35.527 user 0m2.195s
00:04:35.527 sys 0m4.192s
00:04:35.527 23:07:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:35.527 23:07:24 -- common/autotest_common.sh@10 -- # set +x
00:04:35.527 ************************************
00:04:35.527 END TEST dm_mount
00:04:35.527 ************************************
00:04:35.527 23:07:24 -- setup/devices.sh@1 -- # cleanup
00:04:35.527 23:07:24 -- setup/devices.sh@11 -- # cleanup_nvme
00:04:35.527 23:07:24 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:35.527 23:07:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:35.527 23:07:24 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:35.527 23:07:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:35.527 23:07:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:35.787 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:35.787 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:35.787 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:35.787 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:35.787 23:07:24 -- setup/devices.sh@12 -- # cleanup_dm 00:04:35.787 23:07:24 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.787 23:07:24 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:35.787 23:07:24 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.787 23:07:24 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:35.787 23:07:24 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:35.787 23:07:24 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:35.787 00:04:35.787 real 0m27.986s 00:04:35.787 user 0m8.089s 00:04:35.787 sys 0m14.377s 00:04:35.787 23:07:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:35.787 23:07:24 -- common/autotest_common.sh@10 -- # set +x 00:04:35.787 ************************************ 00:04:35.787 END TEST devices 00:04:35.787 ************************************ 00:04:35.787 00:04:35.787 real 1m35.150s 00:04:35.787 user 0m31.364s 00:04:35.787 sys 0m54.207s 00:04:35.787 23:07:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:35.787 23:07:24 -- common/autotest_common.sh@10 -- # set +x 00:04:35.787 ************************************ 00:04:35.787 END TEST setup.sh 00:04:35.787 ************************************ 00:04:35.787 23:07:25 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:39.085 Hugepages 00:04:39.085 node hugesize free / total 00:04:39.085 node0 1048576kB 0 / 0 00:04:39.085 node0 2048kB 2048 / 2048 00:04:39.085 node1 1048576kB 0 / 0 00:04:39.085 node1 2048kB 0 / 0 00:04:39.085 00:04:39.085 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:39.085 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:39.085 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:39.085 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:39.085 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:39.085 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:39.085 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:39.085 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:39.085 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:39.085 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:39.085 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:39.085 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:39.085 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:39.085 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:39.085 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:39.085 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:39.085 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:39.085 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:39.085 23:07:28 -- spdk/autotest.sh@130 -- # uname -s 00:04:39.085 23:07:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:39.085 23:07:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:39.085 23:07:28 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.383 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:80:01.2 (8086 0b00): 
ioatdma -> vfio-pci 00:04:42.383 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:42.383 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:44.294 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:44.555 23:07:33 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:45.497 23:07:34 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:45.497 23:07:34 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:45.497 23:07:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:45.497 23:07:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:45.497 23:07:34 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:45.497 23:07:34 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:45.497 23:07:34 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.497 23:07:34 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:45.497 23:07:34 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.497 23:07:34 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:45.497 23:07:34 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:45.497 23:07:34 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.798 Waiting for block devices as requested 00:04:48.798 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:48.798 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:48.798 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:48.798 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:48.798 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:48.798 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:48.798 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:49.057 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:49.057 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:49.318 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:49.318 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:49.318 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:49.318 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:49.578 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:49.578 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:49.578 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:49.838 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:50.098 23:07:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:50.098 23:07:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:50.098 23:07:39 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:50.098 23:07:39 -- common/autotest_common.sh@1488 -- # grep 0000:65:00.0/nvme/nvme 00:04:50.098 23:07:39 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:50.098 23:07:39 -- common/autotest_common.sh@1489 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:50.098 23:07:39 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:50.098 23:07:39 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:50.098 23:07:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:50.098 23:07:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:50.098 23:07:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:50.098 23:07:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:50.098 23:07:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:50.098 23:07:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:50.098 23:07:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:50.098 23:07:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:50.098 23:07:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:50.098 23:07:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:50.098 23:07:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:50.098 23:07:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:50.098 23:07:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:50.098 23:07:39 -- common/autotest_common.sh@1543 -- # continue 00:04:50.098 23:07:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:50.098 23:07:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:50.098 23:07:39 -- common/autotest_common.sh@10 -- # set +x 00:04:50.098 23:07:39 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:50.098 23:07:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:50.098 23:07:39 -- common/autotest_common.sh@10 -- # set +x 00:04:50.098 23:07:39 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.398 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:53.398 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:53.398 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:53.398 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:53.398 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:53.398 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:53.398 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:53.398 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:53.398 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:53.399 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:53.399 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:53.399 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:53.399 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:53.399 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:53.399 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:53.659 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:53.659 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:53.919 23:07:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:53.919 23:07:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:53.919 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:04:53.919 23:07:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:53.919 23:07:43 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:53.919 23:07:43 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:53.919 23:07:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:53.919 23:07:43 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:53.919 23:07:43 -- common/autotest_common.sh@1565 -- # 
get_nvme_bdfs 00:04:53.919 23:07:43 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:53.919 23:07:43 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:53.919 23:07:43 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:53.919 23:07:43 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:53.919 23:07:43 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:53.919 23:07:43 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:53.919 23:07:43 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:53.919 23:07:43 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:53.919 23:07:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:53.919 23:07:43 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:53.919 23:07:43 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:53.919 23:07:43 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:04:53.919 23:07:43 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:04:53.919 23:07:43 -- common/autotest_common.sh@1579 -- # return 0 00:04:53.919 23:07:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:53.919 23:07:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:53.919 23:07:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:53.919 23:07:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:53.919 23:07:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:53.919 23:07:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:53.919 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:04:53.919 23:07:43 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:53.919 23:07:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.919 23:07:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.919 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:04:54.179 ************************************ 00:04:54.179 START TEST env 00:04:54.179 ************************************ 00:04:54.179 23:07:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:54.179 * Looking for test storage... 
00:04:54.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:54.180 23:07:43 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:54.180 23:07:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.180 23:07:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.180 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:04:54.441 ************************************ 00:04:54.441 START TEST env_memory 00:04:54.441 ************************************ 00:04:54.441 23:07:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:54.441 00:04:54.441 00:04:54.441 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.441 http://cunit.sourceforge.net/ 00:04:54.441 00:04:54.441 00:04:54.441 Suite: memory 00:04:54.441 Test: alloc and free memory map ...[2024-04-26 23:07:43.560491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:54.441 passed 00:04:54.441 Test: mem map translation ...[2024-04-26 23:07:43.585825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:54.441 [2024-04-26 23:07:43.585863] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:54.441 [2024-04-26 23:07:43.585911] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:54.441 [2024-04-26 23:07:43.585919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:54.441 passed 00:04:54.441 Test: mem map registration ...[2024-04-26 23:07:43.641123] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:54.441 [2024-04-26 23:07:43.641145] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:54.441 passed 00:04:54.703 Test: mem map adjacent registrations ...passed 00:04:54.703 00:04:54.703 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.703 suites 1 1 n/a 0 0 00:04:54.703 tests 4 4 4 0 0 00:04:54.703 asserts 152 152 152 0 n/a 00:04:54.703 00:04:54.703 Elapsed time = 0.194 seconds 00:04:54.703 00:04:54.703 real 0m0.208s 00:04:54.703 user 0m0.196s 00:04:54.703 sys 0m0.010s 00:04:54.703 23:07:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.703 23:07:43 -- common/autotest_common.sh@10 -- # set +x 00:04:54.703 ************************************ 00:04:54.703 END TEST env_memory 00:04:54.703 ************************************ 00:04:54.703 23:07:43 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:54.703 23:07:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.703 23:07:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.703 23:07:43 -- common/autotest_common.sh@10 -- # set +x 
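The *ERROR* lines inside env_memory are deliberate negative cases, not failures: spdk_mem_map tracks address space in 2 MiB chunks, so a translation or registration is rejected whenever the vaddr or the len is not 2 MiB-aligned (len=1234 and vaddr=0x4d2 in the traces above), and addresses beyond the canonical usermode range are rejected outright (281474976710656 = 2^48 in the trace). A minimal alignment check in the same spirit; the helper is illustrative, not part of SPDK:

    is_2mb_aligned() {
        local vaddr=$1 len=$2
        local mask=$(( 2 * 1024 * 1024 - 1 ))
        (( (vaddr & mask) == 0 && (len & mask) == 0 ))
    }
    is_2mb_aligned "$(( 0x200000 ))" 2097152 && echo accepted    # both values 2 MiB-aligned
    is_2mb_aligned "$(( 0x4d2 ))" 2097152 || echo rejected       # unaligned vaddr, as logged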
00:04:54.703 ************************************ 00:04:54.703 START TEST env_vtophys 00:04:54.703 ************************************ 00:04:54.703 23:07:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:54.703 EAL: lib.eal log level changed from notice to debug 00:04:54.703 EAL: Detected lcore 0 as core 0 on socket 0 00:04:54.703 EAL: Detected lcore 1 as core 1 on socket 0 00:04:54.703 EAL: Detected lcore 2 as core 2 on socket 0 00:04:54.703 EAL: Detected lcore 3 as core 3 on socket 0 00:04:54.703 EAL: Detected lcore 4 as core 4 on socket 0 00:04:54.703 EAL: Detected lcore 5 as core 5 on socket 0 00:04:54.703 EAL: Detected lcore 6 as core 6 on socket 0 00:04:54.703 EAL: Detected lcore 7 as core 7 on socket 0 00:04:54.703 EAL: Detected lcore 8 as core 8 on socket 0 00:04:54.703 EAL: Detected lcore 9 as core 9 on socket 0 00:04:54.703 EAL: Detected lcore 10 as core 10 on socket 0 00:04:54.703 EAL: Detected lcore 11 as core 11 on socket 0 00:04:54.703 EAL: Detected lcore 12 as core 12 on socket 0 00:04:54.703 EAL: Detected lcore 13 as core 13 on socket 0 00:04:54.703 EAL: Detected lcore 14 as core 14 on socket 0 00:04:54.703 EAL: Detected lcore 15 as core 15 on socket 0 00:04:54.703 EAL: Detected lcore 16 as core 16 on socket 0 00:04:54.703 EAL: Detected lcore 17 as core 17 on socket 0 00:04:54.703 EAL: Detected lcore 18 as core 18 on socket 0 00:04:54.703 EAL: Detected lcore 19 as core 19 on socket 0 00:04:54.703 EAL: Detected lcore 20 as core 20 on socket 0 00:04:54.703 EAL: Detected lcore 21 as core 21 on socket 0 00:04:54.703 EAL: Detected lcore 22 as core 22 on socket 0 00:04:54.703 EAL: Detected lcore 23 as core 23 on socket 0 00:04:54.703 EAL: Detected lcore 24 as core 24 on socket 0 00:04:54.703 EAL: Detected lcore 25 as core 25 on socket 0 00:04:54.703 EAL: Detected lcore 26 as core 26 on socket 0 00:04:54.703 EAL: Detected lcore 27 as core 27 on socket 0 00:04:54.703 EAL: Detected lcore 28 as core 28 on socket 0 00:04:54.703 EAL: Detected lcore 29 as core 29 on socket 0 00:04:54.703 EAL: Detected lcore 30 as core 30 on socket 0 00:04:54.703 EAL: Detected lcore 31 as core 31 on socket 0 00:04:54.703 EAL: Detected lcore 32 as core 32 on socket 0 00:04:54.703 EAL: Detected lcore 33 as core 33 on socket 0 00:04:54.703 EAL: Detected lcore 34 as core 34 on socket 0 00:04:54.703 EAL: Detected lcore 35 as core 35 on socket 0 00:04:54.703 EAL: Detected lcore 36 as core 0 on socket 1 00:04:54.703 EAL: Detected lcore 37 as core 1 on socket 1 00:04:54.703 EAL: Detected lcore 38 as core 2 on socket 1 00:04:54.703 EAL: Detected lcore 39 as core 3 on socket 1 00:04:54.703 EAL: Detected lcore 40 as core 4 on socket 1 00:04:54.703 EAL: Detected lcore 41 as core 5 on socket 1 00:04:54.703 EAL: Detected lcore 42 as core 6 on socket 1 00:04:54.703 EAL: Detected lcore 43 as core 7 on socket 1 00:04:54.704 EAL: Detected lcore 44 as core 8 on socket 1 00:04:54.704 EAL: Detected lcore 45 as core 9 on socket 1 00:04:54.704 EAL: Detected lcore 46 as core 10 on socket 1 00:04:54.704 EAL: Detected lcore 47 as core 11 on socket 1 00:04:54.704 EAL: Detected lcore 48 as core 12 on socket 1 00:04:54.704 EAL: Detected lcore 49 as core 13 on socket 1 00:04:54.704 EAL: Detected lcore 50 as core 14 on socket 1 00:04:54.704 EAL: Detected lcore 51 as core 15 on socket 1 00:04:54.704 EAL: Detected lcore 52 as core 16 on socket 1 00:04:54.704 EAL: Detected lcore 53 as core 17 on socket 1 00:04:54.704 EAL: Detected lcore 54 as core 18 on socket 1 
00:04:54.704 EAL: Detected lcore 55 as core 19 on socket 1 00:04:54.704 EAL: Detected lcore 56 as core 20 on socket 1 00:04:54.704 EAL: Detected lcore 57 as core 21 on socket 1 00:04:54.704 EAL: Detected lcore 58 as core 22 on socket 1 00:04:54.704 EAL: Detected lcore 59 as core 23 on socket 1 00:04:54.704 EAL: Detected lcore 60 as core 24 on socket 1 00:04:54.704 EAL: Detected lcore 61 as core 25 on socket 1 00:04:54.704 EAL: Detected lcore 62 as core 26 on socket 1 00:04:54.704 EAL: Detected lcore 63 as core 27 on socket 1 00:04:54.704 EAL: Detected lcore 64 as core 28 on socket 1 00:04:54.704 EAL: Detected lcore 65 as core 29 on socket 1 00:04:54.704 EAL: Detected lcore 66 as core 30 on socket 1 00:04:54.704 EAL: Detected lcore 67 as core 31 on socket 1 00:04:54.704 EAL: Detected lcore 68 as core 32 on socket 1 00:04:54.704 EAL: Detected lcore 69 as core 33 on socket 1 00:04:54.704 EAL: Detected lcore 70 as core 34 on socket 1 00:04:54.704 EAL: Detected lcore 71 as core 35 on socket 1 00:04:54.704 EAL: Detected lcore 72 as core 0 on socket 0 00:04:54.704 EAL: Detected lcore 73 as core 1 on socket 0 00:04:54.704 EAL: Detected lcore 74 as core 2 on socket 0 00:04:54.704 EAL: Detected lcore 75 as core 3 on socket 0 00:04:54.704 EAL: Detected lcore 76 as core 4 on socket 0 00:04:54.704 EAL: Detected lcore 77 as core 5 on socket 0 00:04:54.704 EAL: Detected lcore 78 as core 6 on socket 0 00:04:54.704 EAL: Detected lcore 79 as core 7 on socket 0 00:04:54.704 EAL: Detected lcore 80 as core 8 on socket 0 00:04:54.704 EAL: Detected lcore 81 as core 9 on socket 0 00:04:54.704 EAL: Detected lcore 82 as core 10 on socket 0 00:04:54.704 EAL: Detected lcore 83 as core 11 on socket 0 00:04:54.704 EAL: Detected lcore 84 as core 12 on socket 0 00:04:54.704 EAL: Detected lcore 85 as core 13 on socket 0 00:04:54.704 EAL: Detected lcore 86 as core 14 on socket 0 00:04:54.704 EAL: Detected lcore 87 as core 15 on socket 0 00:04:54.704 EAL: Detected lcore 88 as core 16 on socket 0 00:04:54.704 EAL: Detected lcore 89 as core 17 on socket 0 00:04:54.704 EAL: Detected lcore 90 as core 18 on socket 0 00:04:54.704 EAL: Detected lcore 91 as core 19 on socket 0 00:04:54.704 EAL: Detected lcore 92 as core 20 on socket 0 00:04:54.704 EAL: Detected lcore 93 as core 21 on socket 0 00:04:54.704 EAL: Detected lcore 94 as core 22 on socket 0 00:04:54.704 EAL: Detected lcore 95 as core 23 on socket 0 00:04:54.704 EAL: Detected lcore 96 as core 24 on socket 0 00:04:54.704 EAL: Detected lcore 97 as core 25 on socket 0 00:04:54.704 EAL: Detected lcore 98 as core 26 on socket 0 00:04:54.704 EAL: Detected lcore 99 as core 27 on socket 0 00:04:54.704 EAL: Detected lcore 100 as core 28 on socket 0 00:04:54.704 EAL: Detected lcore 101 as core 29 on socket 0 00:04:54.704 EAL: Detected lcore 102 as core 30 on socket 0 00:04:54.704 EAL: Detected lcore 103 as core 31 on socket 0 00:04:54.704 EAL: Detected lcore 104 as core 32 on socket 0 00:04:54.704 EAL: Detected lcore 105 as core 33 on socket 0 00:04:54.704 EAL: Detected lcore 106 as core 34 on socket 0 00:04:54.704 EAL: Detected lcore 107 as core 35 on socket 0 00:04:54.704 EAL: Detected lcore 108 as core 0 on socket 1 00:04:54.704 EAL: Detected lcore 109 as core 1 on socket 1 00:04:54.704 EAL: Detected lcore 110 as core 2 on socket 1 00:04:54.704 EAL: Detected lcore 111 as core 3 on socket 1 00:04:54.704 EAL: Detected lcore 112 as core 4 on socket 1 00:04:54.704 EAL: Detected lcore 113 as core 5 on socket 1 00:04:54.704 EAL: Detected lcore 114 as core 6 on socket 1 00:04:54.704 
EAL: Detected lcore 115 as core 7 on socket 1 00:04:54.704 EAL: Detected lcore 116 as core 8 on socket 1 00:04:54.704 EAL: Detected lcore 117 as core 9 on socket 1 00:04:54.704 EAL: Detected lcore 118 as core 10 on socket 1 00:04:54.704 EAL: Detected lcore 119 as core 11 on socket 1 00:04:54.704 EAL: Detected lcore 120 as core 12 on socket 1 00:04:54.704 EAL: Detected lcore 121 as core 13 on socket 1 00:04:54.704 EAL: Detected lcore 122 as core 14 on socket 1 00:04:54.704 EAL: Detected lcore 123 as core 15 on socket 1 00:04:54.704 EAL: Detected lcore 124 as core 16 on socket 1 00:04:54.704 EAL: Detected lcore 125 as core 17 on socket 1 00:04:54.704 EAL: Detected lcore 126 as core 18 on socket 1 00:04:54.704 EAL: Detected lcore 127 as core 19 on socket 1 00:04:54.704 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:54.704 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:54.704 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:54.704 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:54.704 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:54.704 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:54.704 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:54.704 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:54.704 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:54.704 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:54.704 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:54.704 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:54.704 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:54.704 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:54.704 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:54.704 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:54.704 EAL: Maximum logical cores by configuration: 128 00:04:54.704 EAL: Detected CPU lcores: 128 00:04:54.704 EAL: Detected NUMA nodes: 2 00:04:54.704 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:54.704 EAL: Detected shared linkage of DPDK 00:04:54.704 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:54.704 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:54.704 EAL: Registered [vdev] bus. 
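Everything from here down to the VFIO probe is EAL wiring: the pmds-24.0 libraries being opened are the DPDK driver plugins this job built (the PCI and vdev buses, the ring mempool, and the i40e NIC PMD), and the "IOVA as VA" decision that follows is only possible because the host exposes an IOMMU. A minimal host-side check for that precondition, using standard sysfs paths rather than anything SPDK-specific:

    # An IOMMU usable by vfio-pci shows up as populated iommu_groups;
    # an empty directory here would push EAL toward IOVA-as-PA instead.
    if [[ -d /sys/kernel/iommu_groups && -n $(ls -A /sys/kernel/iommu_groups) ]]; then
        echo "IOMMU present: IOVA-as-VA is possible"
    fi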
00:04:54.704 EAL: bus.vdev log level changed from disabled to notice 00:04:54.704 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:54.704 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:54.704 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:54.704 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:54.704 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:54.704 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:54.704 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:54.704 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:54.704 EAL: No shared files mode enabled, IPC will be disabled 00:04:54.704 EAL: No shared files mode enabled, IPC is disabled 00:04:54.704 EAL: Bus pci wants IOVA as 'DC' 00:04:54.704 EAL: Bus vdev wants IOVA as 'DC' 00:04:54.704 EAL: Buses did not request a specific IOVA mode. 00:04:54.704 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:54.704 EAL: Selected IOVA mode 'VA' 00:04:54.704 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.704 EAL: Probing VFIO support... 00:04:54.704 EAL: IOMMU type 1 (Type 1) is supported 00:04:54.704 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:54.704 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:54.704 EAL: VFIO support initialized 00:04:54.704 EAL: Ask a virtual area of 0x2e000 bytes 00:04:54.704 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:54.704 EAL: Setting up physically contiguous memory... 
00:04:54.704 EAL: Setting maximum number of open files to 524288 00:04:54.704 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:54.704 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:54.704 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:54.704 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.704 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:54.704 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.704 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.704 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:54.704 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:54.704 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.704 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:54.704 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.704 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.704 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:54.704 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:54.704 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.704 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:54.704 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.704 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.704 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:54.704 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:54.704 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.704 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:54.704 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.704 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.704 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:54.704 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:54.704 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:54.704 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.704 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:54.704 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.704 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.704 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:54.704 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:54.704 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.704 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:54.704 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.704 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.704 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:54.704 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:54.704 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.704 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:54.704 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.705 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.705 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:54.705 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:54.705 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.705 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:54.705 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.705 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.705 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:54.705 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:54.705 EAL: Hugepages will be freed exactly as allocated. 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: TSC frequency is ~2400000 KHz 00:04:54.705 EAL: Main lcore 0 is ready (tid=7fb661d58a00;cpuset=[0]) 00:04:54.705 EAL: Trying to obtain current memory policy. 00:04:54.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.705 EAL: Restoring previous memory policy: 0 00:04:54.705 EAL: request: mp_malloc_sync 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: Heap on socket 0 was expanded by 2MB 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:54.705 EAL: Mem event callback 'spdk:(nil)' registered 00:04:54.705 00:04:54.705 00:04:54.705 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.705 http://cunit.sourceforge.net/ 00:04:54.705 00:04:54.705 00:04:54.705 Suite: components_suite 00:04:54.705 Test: vtophys_malloc_test ...passed 00:04:54.705 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:54.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.705 EAL: Restoring previous memory policy: 4 00:04:54.705 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.705 EAL: request: mp_malloc_sync 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: Heap on socket 0 was expanded by 4MB 00:04:54.705 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.705 EAL: request: mp_malloc_sync 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: Heap on socket 0 was shrunk by 4MB 00:04:54.705 EAL: Trying to obtain current memory policy. 00:04:54.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.705 EAL: Restoring previous memory policy: 4 00:04:54.705 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.705 EAL: request: mp_malloc_sync 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: Heap on socket 0 was expanded by 6MB 00:04:54.705 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.705 EAL: request: mp_malloc_sync 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: Heap on socket 0 was shrunk by 6MB 00:04:54.705 EAL: Trying to obtain current memory policy. 00:04:54.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.705 EAL: Restoring previous memory policy: 4 00:04:54.705 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.705 EAL: request: mp_malloc_sync 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: Heap on socket 0 was expanded by 10MB 00:04:54.705 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.705 EAL: request: mp_malloc_sync 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: Heap on socket 0 was shrunk by 10MB 00:04:54.705 EAL: Trying to obtain current memory policy. 
00:04:54.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.705 EAL: Restoring previous memory policy: 4 00:04:54.705 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.705 EAL: request: mp_malloc_sync 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: Heap on socket 0 was expanded by 18MB 00:04:54.705 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.705 EAL: request: mp_malloc_sync 00:04:54.705 EAL: No shared files mode enabled, IPC is disabled 00:04:54.705 EAL: Heap on socket 0 was shrunk by 18MB 00:04:54.705 EAL: Trying to obtain current memory policy. 00:04:54.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.966 EAL: Restoring previous memory policy: 4 00:04:54.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.966 EAL: request: mp_malloc_sync 00:04:54.966 EAL: No shared files mode enabled, IPC is disabled 00:04:54.966 EAL: Heap on socket 0 was expanded by 34MB 00:04:54.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.966 EAL: request: mp_malloc_sync 00:04:54.966 EAL: No shared files mode enabled, IPC is disabled 00:04:54.966 EAL: Heap on socket 0 was shrunk by 34MB 00:04:54.966 EAL: Trying to obtain current memory policy. 00:04:54.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.966 EAL: Restoring previous memory policy: 4 00:04:54.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.966 EAL: request: mp_malloc_sync 00:04:54.966 EAL: No shared files mode enabled, IPC is disabled 00:04:54.966 EAL: Heap on socket 0 was expanded by 66MB 00:04:54.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.966 EAL: request: mp_malloc_sync 00:04:54.966 EAL: No shared files mode enabled, IPC is disabled 00:04:54.966 EAL: Heap on socket 0 was shrunk by 66MB 00:04:54.966 EAL: Trying to obtain current memory policy. 00:04:54.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.966 EAL: Restoring previous memory policy: 4 00:04:54.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.966 EAL: request: mp_malloc_sync 00:04:54.966 EAL: No shared files mode enabled, IPC is disabled 00:04:54.966 EAL: Heap on socket 0 was expanded by 130MB 00:04:54.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.966 EAL: request: mp_malloc_sync 00:04:54.966 EAL: No shared files mode enabled, IPC is disabled 00:04:54.966 EAL: Heap on socket 0 was shrunk by 130MB 00:04:54.966 EAL: Trying to obtain current memory policy. 00:04:54.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.966 EAL: Restoring previous memory policy: 4 00:04:54.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.966 EAL: request: mp_malloc_sync 00:04:54.966 EAL: No shared files mode enabled, IPC is disabled 00:04:54.966 EAL: Heap on socket 0 was expanded by 258MB 00:04:54.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.966 EAL: request: mp_malloc_sync 00:04:54.966 EAL: No shared files mode enabled, IPC is disabled 00:04:54.966 EAL: Heap on socket 0 was shrunk by 258MB 00:04:54.966 EAL: Trying to obtain current memory policy. 
00:04:54.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.966 EAL: Restoring previous memory policy: 4 00:04:54.966 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.966 EAL: request: mp_malloc_sync 00:04:54.966 EAL: No shared files mode enabled, IPC is disabled 00:04:54.966 EAL: Heap on socket 0 was expanded by 514MB 00:04:55.227 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.227 EAL: request: mp_malloc_sync 00:04:55.227 EAL: No shared files mode enabled, IPC is disabled 00:04:55.227 EAL: Heap on socket 0 was shrunk by 514MB 00:04:55.227 EAL: Trying to obtain current memory policy. 00:04:55.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.227 EAL: Restoring previous memory policy: 4 00:04:55.227 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.227 EAL: request: mp_malloc_sync 00:04:55.227 EAL: No shared files mode enabled, IPC is disabled 00:04:55.227 EAL: Heap on socket 0 was expanded by 1026MB 00:04:55.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.515 EAL: request: mp_malloc_sync 00:04:55.515 EAL: No shared files mode enabled, IPC is disabled 00:04:55.515 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:55.515 passed 00:04:55.515 00:04:55.515 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.515 suites 1 1 n/a 0 0 00:04:55.515 tests 2 2 2 0 0 00:04:55.515 asserts 497 497 497 0 n/a 00:04:55.515 00:04:55.515 Elapsed time = 0.647 seconds 00:04:55.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.515 EAL: request: mp_malloc_sync 00:04:55.515 EAL: No shared files mode enabled, IPC is disabled 00:04:55.515 EAL: Heap on socket 0 was shrunk by 2MB 00:04:55.515 EAL: No shared files mode enabled, IPC is disabled 00:04:55.515 EAL: No shared files mode enabled, IPC is disabled 00:04:55.515 EAL: No shared files mode enabled, IPC is disabled 00:04:55.515 00:04:55.515 real 0m0.752s 00:04:55.515 user 0m0.413s 00:04:55.515 sys 0m0.312s 00:04:55.515 23:07:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.515 23:07:44 -- common/autotest_common.sh@10 -- # set +x 00:04:55.515 ************************************ 00:04:55.515 END TEST env_vtophys 00:04:55.515 ************************************ 00:04:55.515 23:07:44 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:55.515 23:07:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.515 23:07:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.515 23:07:44 -- common/autotest_common.sh@10 -- # set +x 00:04:55.827 ************************************ 00:04:55.827 START TEST env_pci 00:04:55.827 ************************************ 00:04:55.827 23:07:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:55.827 00:04:55.827 00:04:55.827 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.827 http://cunit.sourceforge.net/ 00:04:55.827 00:04:55.827 00:04:55.827 Suite: pci 00:04:55.827 Test: pci_hook ...[2024-04-26 23:07:44.793998] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3716423 has claimed it 00:04:55.827 EAL: Cannot find device (10000:00:01.0) 00:04:55.827 EAL: Failed to attach device on primary process 00:04:55.827 passed 00:04:55.827 00:04:55.827 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.827 suites 1 1 n/a 0 0 00:04:55.827 tests 1 1 1 0 0 
00:04:55.827 asserts 25 25 25 0 n/a 00:04:55.827 00:04:55.827 Elapsed time = 0.029 seconds 00:04:55.827 00:04:55.827 real 0m0.048s 00:04:55.827 user 0m0.017s 00:04:55.827 sys 0m0.031s 00:04:55.827 23:07:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.827 23:07:44 -- common/autotest_common.sh@10 -- # set +x 00:04:55.827 ************************************ 00:04:55.827 END TEST env_pci 00:04:55.827 ************************************ 00:04:55.827 23:07:44 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:55.827 23:07:44 -- env/env.sh@15 -- # uname 00:04:55.827 23:07:44 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:55.827 23:07:44 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:55.828 23:07:44 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.828 23:07:44 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:55.828 23:07:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.828 23:07:44 -- common/autotest_common.sh@10 -- # set +x 00:04:55.828 ************************************ 00:04:55.828 START TEST env_dpdk_post_init 00:04:55.828 ************************************ 00:04:55.828 23:07:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.828 EAL: Detected CPU lcores: 128 00:04:55.828 EAL: Detected NUMA nodes: 2 00:04:55.828 EAL: Detected shared linkage of DPDK 00:04:55.828 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.828 EAL: Selected IOVA mode 'VA' 00:04:55.828 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.828 EAL: VFIO support initialized 00:04:55.828 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.088 EAL: Using IOMMU type 1 (Type 1) 00:04:56.088 EAL: Ignore mapping IO port bar(1) 00:04:56.088 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:56.348 EAL: Ignore mapping IO port bar(1) 00:04:56.348 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:56.609 EAL: Ignore mapping IO port bar(1) 00:04:56.609 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:56.870 EAL: Ignore mapping IO port bar(1) 00:04:56.870 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:56.870 EAL: Ignore mapping IO port bar(1) 00:04:57.130 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:57.130 EAL: Ignore mapping IO port bar(1) 00:04:57.390 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:57.390 EAL: Ignore mapping IO port bar(1) 00:04:57.650 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:57.650 EAL: Ignore mapping IO port bar(1) 00:04:57.650 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:57.911 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:58.176 EAL: Ignore mapping IO port bar(1) 00:04:58.176 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:58.436 EAL: Ignore mapping IO port bar(1) 00:04:58.436 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:58.436 EAL: Ignore mapping IO port bar(1) 00:04:58.696 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:04:58.696 EAL: Ignore mapping IO port bar(1) 00:04:58.956 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:58.956 EAL: Ignore mapping IO port bar(1) 00:04:59.216 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:59.216 EAL: Ignore mapping IO port bar(1) 00:04:59.216 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:59.476 EAL: Ignore mapping IO port bar(1) 00:04:59.476 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:59.736 EAL: Ignore mapping IO port bar(1) 00:04:59.736 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:59.736 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:59.736 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:59.995 Starting DPDK initialization... 00:04:59.995 Starting SPDK post initialization... 00:04:59.995 SPDK NVMe probe 00:04:59.995 Attaching to 0000:65:00.0 00:04:59.995 Attached to 0000:65:00.0 00:04:59.995 Cleaning up... 00:05:01.906 00:05:01.906 real 0m5.698s 00:05:01.906 user 0m0.173s 00:05:01.906 sys 0m0.067s 00:05:01.906 23:07:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.906 23:07:50 -- common/autotest_common.sh@10 -- # set +x 00:05:01.906 ************************************ 00:05:01.906 END TEST env_dpdk_post_init 00:05:01.906 ************************************ 00:05:01.906 23:07:50 -- env/env.sh@26 -- # uname 00:05:01.906 23:07:50 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:01.906 23:07:50 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.906 23:07:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.906 23:07:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.906 23:07:50 -- common/autotest_common.sh@10 -- # set +x 00:05:01.906 ************************************ 00:05:01.906 START TEST env_mem_callbacks 00:05:01.906 ************************************ 00:05:01.906 23:07:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.906 EAL: Detected CPU lcores: 128 00:05:01.906 EAL: Detected NUMA nodes: 2 00:05:01.906 EAL: Detected shared linkage of DPDK 00:05:01.906 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.906 EAL: Selected IOVA mode 'VA' 00:05:01.906 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.906 EAL: VFIO support initialized 00:05:01.906 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.906 00:05:01.906 00:05:01.906 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.906 http://cunit.sourceforge.net/ 00:05:01.906 00:05:01.906 00:05:01.906 Suite: memory 00:05:01.906 Test: test ... 
00:05:01.906 register 0x200000200000 2097152 00:05:01.906 malloc 3145728 00:05:01.906 register 0x200000400000 4194304 00:05:01.906 buf 0x200000500000 len 3145728 PASSED 00:05:01.906 malloc 64 00:05:01.906 buf 0x2000004fff40 len 64 PASSED 00:05:01.906 malloc 4194304 00:05:01.906 register 0x200000800000 6291456 00:05:01.906 buf 0x200000a00000 len 4194304 PASSED 00:05:01.906 free 0x200000500000 3145728 00:05:01.906 free 0x2000004fff40 64 00:05:01.906 unregister 0x200000400000 4194304 PASSED 00:05:01.906 free 0x200000a00000 4194304 00:05:01.906 unregister 0x200000800000 6291456 PASSED 00:05:01.906 malloc 8388608 00:05:01.907 register 0x200000400000 10485760 00:05:01.907 buf 0x200000600000 len 8388608 PASSED 00:05:01.907 free 0x200000600000 8388608 00:05:01.907 unregister 0x200000400000 10485760 PASSED 00:05:01.907 passed 00:05:01.907 00:05:01.907 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.907 suites 1 1 n/a 0 0 00:05:01.907 tests 1 1 1 0 0 00:05:01.907 asserts 15 15 15 0 n/a 00:05:01.907 00:05:01.907 Elapsed time = 0.008 seconds 00:05:01.907 00:05:01.907 real 0m0.064s 00:05:01.907 user 0m0.021s 00:05:01.907 sys 0m0.043s 00:05:01.907 23:07:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.907 23:07:50 -- common/autotest_common.sh@10 -- # set +x 00:05:01.907 ************************************ 00:05:01.907 END TEST env_mem_callbacks 00:05:01.907 ************************************ 00:05:01.907 00:05:01.907 real 0m7.669s 00:05:01.907 user 0m1.152s 00:05:01.907 sys 0m0.976s 00:05:01.907 23:07:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.907 23:07:50 -- common/autotest_common.sh@10 -- # set +x 00:05:01.907 ************************************ 00:05:01.907 END TEST env 00:05:01.907 ************************************ 00:05:01.907 23:07:51 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:01.907 23:07:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.907 23:07:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.907 23:07:51 -- common/autotest_common.sh@10 -- # set +x 00:05:01.907 ************************************ 00:05:01.907 START TEST rpc 00:05:01.907 ************************************ 00:05:01.907 23:07:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:02.167 * Looking for test storage... 00:05:02.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:02.167 23:07:51 -- rpc/rpc.sh@65 -- # spdk_pid=3717896 00:05:02.167 23:07:51 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.167 23:07:51 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:02.167 23:07:51 -- rpc/rpc.sh@67 -- # waitforlisten 3717896 00:05:02.167 23:07:51 -- common/autotest_common.sh@817 -- # '[' -z 3717896 ']' 00:05:02.167 23:07:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.167 23:07:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:02.167 23:07:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
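rpc.sh never talks to the target directly: rpc_cmd is a thin wrapper over scripts/rpc.py, and waitforlisten simply polls until the freshly started spdk_tgt answers on /var/tmp/spdk.sock, which is what the echo above is waiting on. A minimal sketch of the same start/wait/call pattern, assuming a built tree in $SPDK_DIR; the polling loop stands in for waitforlisten, and the integrity tests that follow drive bdev_passthru_create/delete through the same path:

    "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &    # -e bdev: same tracepoint mask as above
    tgt_pid=$!
    until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2    # socket not accepting yet, keep polling
    done
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 8 512    # 8 MiB malloc bdev, 512 B blocks
    kill "$tgt_pid"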
00:05:02.167 23:07:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:02.168 23:07:51 -- common/autotest_common.sh@10 -- # set +x 00:05:02.168 [2024-04-26 23:07:51.305099] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:02.168 [2024-04-26 23:07:51.305160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717896 ] 00:05:02.168 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.168 [2024-04-26 23:07:51.373058] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.168 [2024-04-26 23:07:51.410289] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:02.168 [2024-04-26 23:07:51.410337] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3717896' to capture a snapshot of events at runtime. 00:05:02.168 [2024-04-26 23:07:51.410345] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.168 [2024-04-26 23:07:51.410352] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.168 [2024-04-26 23:07:51.410358] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3717896 for offline analysis/debug. 00:05:02.168 [2024-04-26 23:07:51.410394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.110 23:07:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:03.110 23:07:52 -- common/autotest_common.sh@850 -- # return 0 00:05:03.110 23:07:52 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.111 23:07:52 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.111 23:07:52 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:03.111 23:07:52 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:03.111 23:07:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.111 23:07:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.111 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.111 ************************************ 00:05:03.111 START TEST rpc_integrity 00:05:03.111 ************************************ 00:05:03.111 23:07:52 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:03.111 23:07:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.111 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.111 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.111 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.111 23:07:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.111 23:07:52 -- rpc/rpc.sh@13 -- # jq length 00:05:03.111 23:07:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.111 23:07:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.111 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:05:03.111 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.111 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.111 23:07:52 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:03.111 23:07:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.111 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.111 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.111 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.111 23:07:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.111 { 00:05:03.111 "name": "Malloc0", 00:05:03.111 "aliases": [ 00:05:03.111 "0dd53635-134b-4936-8ec7-d2b5b13ea828" 00:05:03.111 ], 00:05:03.111 "product_name": "Malloc disk", 00:05:03.111 "block_size": 512, 00:05:03.111 "num_blocks": 16384, 00:05:03.111 "uuid": "0dd53635-134b-4936-8ec7-d2b5b13ea828", 00:05:03.111 "assigned_rate_limits": { 00:05:03.111 "rw_ios_per_sec": 0, 00:05:03.111 "rw_mbytes_per_sec": 0, 00:05:03.111 "r_mbytes_per_sec": 0, 00:05:03.111 "w_mbytes_per_sec": 0 00:05:03.111 }, 00:05:03.111 "claimed": false, 00:05:03.111 "zoned": false, 00:05:03.111 "supported_io_types": { 00:05:03.111 "read": true, 00:05:03.111 "write": true, 00:05:03.111 "unmap": true, 00:05:03.111 "write_zeroes": true, 00:05:03.111 "flush": true, 00:05:03.111 "reset": true, 00:05:03.111 "compare": false, 00:05:03.111 "compare_and_write": false, 00:05:03.111 "abort": true, 00:05:03.111 "nvme_admin": false, 00:05:03.111 "nvme_io": false 00:05:03.111 }, 00:05:03.111 "memory_domains": [ 00:05:03.111 { 00:05:03.111 "dma_device_id": "system", 00:05:03.111 "dma_device_type": 1 00:05:03.111 }, 00:05:03.111 { 00:05:03.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.111 "dma_device_type": 2 00:05:03.111 } 00:05:03.111 ], 00:05:03.111 "driver_specific": {} 00:05:03.111 } 00:05:03.111 ]' 00:05:03.111 23:07:52 -- rpc/rpc.sh@17 -- # jq length 00:05:03.111 23:07:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.111 23:07:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:03.111 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.111 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.111 [2024-04-26 23:07:52.362389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:03.111 [2024-04-26 23:07:52.362421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.111 [2024-04-26 23:07:52.362434] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe11850 00:05:03.111 [2024-04-26 23:07:52.362442] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.111 [2024-04-26 23:07:52.363779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.111 [2024-04-26 23:07:52.363800] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.372 Passthru0 00:05:03.372 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.372 23:07:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.372 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.372 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.372 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.372 23:07:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.372 { 00:05:03.372 "name": "Malloc0", 00:05:03.372 "aliases": [ 00:05:03.372 "0dd53635-134b-4936-8ec7-d2b5b13ea828" 00:05:03.372 ], 00:05:03.372 "product_name": "Malloc disk", 00:05:03.372 "block_size": 512, 
00:05:03.372 "num_blocks": 16384, 00:05:03.372 "uuid": "0dd53635-134b-4936-8ec7-d2b5b13ea828", 00:05:03.372 "assigned_rate_limits": { 00:05:03.372 "rw_ios_per_sec": 0, 00:05:03.372 "rw_mbytes_per_sec": 0, 00:05:03.372 "r_mbytes_per_sec": 0, 00:05:03.372 "w_mbytes_per_sec": 0 00:05:03.372 }, 00:05:03.372 "claimed": true, 00:05:03.372 "claim_type": "exclusive_write", 00:05:03.372 "zoned": false, 00:05:03.372 "supported_io_types": { 00:05:03.372 "read": true, 00:05:03.372 "write": true, 00:05:03.372 "unmap": true, 00:05:03.372 "write_zeroes": true, 00:05:03.372 "flush": true, 00:05:03.372 "reset": true, 00:05:03.372 "compare": false, 00:05:03.372 "compare_and_write": false, 00:05:03.372 "abort": true, 00:05:03.372 "nvme_admin": false, 00:05:03.372 "nvme_io": false 00:05:03.372 }, 00:05:03.372 "memory_domains": [ 00:05:03.372 { 00:05:03.372 "dma_device_id": "system", 00:05:03.372 "dma_device_type": 1 00:05:03.372 }, 00:05:03.372 { 00:05:03.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.372 "dma_device_type": 2 00:05:03.372 } 00:05:03.372 ], 00:05:03.372 "driver_specific": {} 00:05:03.372 }, 00:05:03.372 { 00:05:03.372 "name": "Passthru0", 00:05:03.372 "aliases": [ 00:05:03.372 "47f1c6a4-599d-5e2a-879a-f9de51750215" 00:05:03.372 ], 00:05:03.372 "product_name": "passthru", 00:05:03.372 "block_size": 512, 00:05:03.372 "num_blocks": 16384, 00:05:03.372 "uuid": "47f1c6a4-599d-5e2a-879a-f9de51750215", 00:05:03.372 "assigned_rate_limits": { 00:05:03.372 "rw_ios_per_sec": 0, 00:05:03.372 "rw_mbytes_per_sec": 0, 00:05:03.372 "r_mbytes_per_sec": 0, 00:05:03.372 "w_mbytes_per_sec": 0 00:05:03.372 }, 00:05:03.372 "claimed": false, 00:05:03.372 "zoned": false, 00:05:03.372 "supported_io_types": { 00:05:03.372 "read": true, 00:05:03.372 "write": true, 00:05:03.372 "unmap": true, 00:05:03.372 "write_zeroes": true, 00:05:03.372 "flush": true, 00:05:03.372 "reset": true, 00:05:03.372 "compare": false, 00:05:03.372 "compare_and_write": false, 00:05:03.372 "abort": true, 00:05:03.373 "nvme_admin": false, 00:05:03.373 "nvme_io": false 00:05:03.373 }, 00:05:03.373 "memory_domains": [ 00:05:03.373 { 00:05:03.373 "dma_device_id": "system", 00:05:03.373 "dma_device_type": 1 00:05:03.373 }, 00:05:03.373 { 00:05:03.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.373 "dma_device_type": 2 00:05:03.373 } 00:05:03.373 ], 00:05:03.373 "driver_specific": { 00:05:03.373 "passthru": { 00:05:03.373 "name": "Passthru0", 00:05:03.373 "base_bdev_name": "Malloc0" 00:05:03.373 } 00:05:03.373 } 00:05:03.373 } 00:05:03.373 ]' 00:05:03.373 23:07:52 -- rpc/rpc.sh@21 -- # jq length 00:05:03.373 23:07:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.373 23:07:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.373 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.373 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.373 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.373 23:07:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:03.373 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.373 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.373 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.373 23:07:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.373 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.373 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.373 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.373 23:07:52 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.373 23:07:52 -- rpc/rpc.sh@26 -- # jq length 00:05:03.373 23:07:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.373 00:05:03.373 real 0m0.290s 00:05:03.373 user 0m0.190s 00:05:03.373 sys 0m0.036s 00:05:03.373 23:07:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.373 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.373 ************************************ 00:05:03.373 END TEST rpc_integrity 00:05:03.373 ************************************ 00:05:03.373 23:07:52 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:03.373 23:07:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.373 23:07:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.373 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.634 ************************************ 00:05:03.634 START TEST rpc_plugins 00:05:03.634 ************************************ 00:05:03.634 23:07:52 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:05:03.634 23:07:52 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:03.634 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.634 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.634 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.634 23:07:52 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:03.634 23:07:52 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:03.634 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.634 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.634 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.634 23:07:52 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:03.634 { 00:05:03.634 "name": "Malloc1", 00:05:03.634 "aliases": [ 00:05:03.634 "4381f3c1-9132-4d71-b194-7694f4e0a0f2" 00:05:03.634 ], 00:05:03.634 "product_name": "Malloc disk", 00:05:03.634 "block_size": 4096, 00:05:03.634 "num_blocks": 256, 00:05:03.634 "uuid": "4381f3c1-9132-4d71-b194-7694f4e0a0f2", 00:05:03.634 "assigned_rate_limits": { 00:05:03.634 "rw_ios_per_sec": 0, 00:05:03.634 "rw_mbytes_per_sec": 0, 00:05:03.634 "r_mbytes_per_sec": 0, 00:05:03.634 "w_mbytes_per_sec": 0 00:05:03.634 }, 00:05:03.634 "claimed": false, 00:05:03.634 "zoned": false, 00:05:03.634 "supported_io_types": { 00:05:03.634 "read": true, 00:05:03.634 "write": true, 00:05:03.634 "unmap": true, 00:05:03.634 "write_zeroes": true, 00:05:03.634 "flush": true, 00:05:03.634 "reset": true, 00:05:03.634 "compare": false, 00:05:03.634 "compare_and_write": false, 00:05:03.634 "abort": true, 00:05:03.634 "nvme_admin": false, 00:05:03.634 "nvme_io": false 00:05:03.634 }, 00:05:03.634 "memory_domains": [ 00:05:03.634 { 00:05:03.634 "dma_device_id": "system", 00:05:03.634 "dma_device_type": 1 00:05:03.634 }, 00:05:03.634 { 00:05:03.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.634 "dma_device_type": 2 00:05:03.634 } 00:05:03.634 ], 00:05:03.634 "driver_specific": {} 00:05:03.634 } 00:05:03.634 ]' 00:05:03.634 23:07:52 -- rpc/rpc.sh@32 -- # jq length 00:05:03.634 23:07:52 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:03.634 23:07:52 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:03.634 23:07:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.634 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.634 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.634 23:07:52 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.634 23:07:52 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:05:03.634 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.634 23:07:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.634 23:07:52 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.634 23:07:52 -- rpc/rpc.sh@36 -- # jq length 00:05:03.634 23:07:52 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.634 00:05:03.634 real 0m0.152s 00:05:03.634 user 0m0.095s 00:05:03.634 sys 0m0.018s 00:05:03.634 23:07:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.634 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.634 ************************************ 00:05:03.634 END TEST rpc_plugins 00:05:03.634 ************************************ 00:05:03.634 23:07:52 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.634 23:07:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.634 23:07:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.634 23:07:52 -- common/autotest_common.sh@10 -- # set +x 00:05:03.896 ************************************ 00:05:03.896 START TEST rpc_trace_cmd_test 00:05:03.896 ************************************ 00:05:03.896 23:07:53 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:05:03.896 23:07:53 -- rpc/rpc.sh@40 -- # local info 00:05:03.896 23:07:53 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:03.896 23:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.896 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:03.896 23:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.896 23:07:53 -- rpc/rpc.sh@42 -- # info='{ 00:05:03.896 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3717896", 00:05:03.896 "tpoint_group_mask": "0x8", 00:05:03.896 "iscsi_conn": { 00:05:03.896 "mask": "0x2", 00:05:03.896 "tpoint_mask": "0x0" 00:05:03.896 }, 00:05:03.897 "scsi": { 00:05:03.897 "mask": "0x4", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "bdev": { 00:05:03.897 "mask": "0x8", 00:05:03.897 "tpoint_mask": "0xffffffffffffffff" 00:05:03.897 }, 00:05:03.897 "nvmf_rdma": { 00:05:03.897 "mask": "0x10", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "nvmf_tcp": { 00:05:03.897 "mask": "0x20", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "ftl": { 00:05:03.897 "mask": "0x40", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "blobfs": { 00:05:03.897 "mask": "0x80", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "dsa": { 00:05:03.897 "mask": "0x200", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "thread": { 00:05:03.897 "mask": "0x400", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "nvme_pcie": { 00:05:03.897 "mask": "0x800", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "iaa": { 00:05:03.897 "mask": "0x1000", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "nvme_tcp": { 00:05:03.897 "mask": "0x2000", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "bdev_nvme": { 00:05:03.897 "mask": "0x4000", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 }, 00:05:03.897 "sock": { 00:05:03.897 "mask": "0x8000", 00:05:03.897 "tpoint_mask": "0x0" 00:05:03.897 } 00:05:03.897 }' 00:05:03.897 23:07:53 -- rpc/rpc.sh@43 -- # jq length 00:05:03.897 23:07:53 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:03.897 23:07:53 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:03.897 23:07:53 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:03.897 23:07:53 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
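Note: trace_get_info above reports tpoint_group_mask 0x8 because the target was started with the bdev tracepoint group enabled, and the bdev entry carries tpoint_mask 0xffffffffffffffff, i.e. every tracepoint in that group is armed while all other groups stay at 0x0. A rough sketch of inspecting the same state by hand, with <pid> standing in for the target pid shown earlier in the log:

  scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path   # /dev/shm/spdk_tgt_trace.pid<pid>
  spdk_trace -s spdk_tgt -p <pid>                          # decode the shared-memory trace, per the startup NOTICE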
00:05:04.157 23:07:53 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:04.157 23:07:53 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:04.157 23:07:53 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:04.157 23:07:53 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:04.157 23:07:53 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:04.157 00:05:04.158 real 0m0.248s 00:05:04.158 user 0m0.219s 00:05:04.158 sys 0m0.020s 00:05:04.158 23:07:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.158 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.158 ************************************ 00:05:04.158 END TEST rpc_trace_cmd_test 00:05:04.158 ************************************ 00:05:04.158 23:07:53 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:04.158 23:07:53 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:04.158 23:07:53 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:04.158 23:07:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.158 23:07:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.158 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.418 ************************************ 00:05:04.418 START TEST rpc_daemon_integrity 00:05:04.418 ************************************ 00:05:04.418 23:07:53 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:04.418 23:07:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:04.418 23:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.418 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.418 23:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.418 23:07:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:04.418 23:07:53 -- rpc/rpc.sh@13 -- # jq length 00:05:04.418 23:07:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:04.418 23:07:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:04.418 23:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.418 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.418 23:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.418 23:07:53 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:04.418 23:07:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.418 23:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.418 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.418 23:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.418 23:07:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.418 { 00:05:04.418 "name": "Malloc2", 00:05:04.418 "aliases": [ 00:05:04.418 "c59c2ce6-4be4-4da0-a0f9-5fe3834b894b" 00:05:04.418 ], 00:05:04.418 "product_name": "Malloc disk", 00:05:04.418 "block_size": 512, 00:05:04.418 "num_blocks": 16384, 00:05:04.418 "uuid": "c59c2ce6-4be4-4da0-a0f9-5fe3834b894b", 00:05:04.418 "assigned_rate_limits": { 00:05:04.418 "rw_ios_per_sec": 0, 00:05:04.418 "rw_mbytes_per_sec": 0, 00:05:04.418 "r_mbytes_per_sec": 0, 00:05:04.418 "w_mbytes_per_sec": 0 00:05:04.418 }, 00:05:04.418 "claimed": false, 00:05:04.418 "zoned": false, 00:05:04.418 "supported_io_types": { 00:05:04.418 "read": true, 00:05:04.418 "write": true, 00:05:04.418 "unmap": true, 00:05:04.418 "write_zeroes": true, 00:05:04.418 "flush": true, 00:05:04.418 "reset": true, 00:05:04.418 "compare": false, 00:05:04.418 "compare_and_write": false, 00:05:04.418 "abort": true, 00:05:04.418 "nvme_admin": false, 00:05:04.418 "nvme_io": false 00:05:04.418 }, 00:05:04.418 "memory_domains": [ 00:05:04.418 { 00:05:04.418 "dma_device_id": "system", 00:05:04.418 
"dma_device_type": 1 00:05:04.418 }, 00:05:04.418 { 00:05:04.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.418 "dma_device_type": 2 00:05:04.418 } 00:05:04.418 ], 00:05:04.418 "driver_specific": {} 00:05:04.418 } 00:05:04.418 ]' 00:05:04.418 23:07:53 -- rpc/rpc.sh@17 -- # jq length 00:05:04.418 23:07:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.418 23:07:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:04.418 23:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.418 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.418 [2024-04-26 23:07:53.617782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:04.418 [2024-04-26 23:07:53.617811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.418 [2024-04-26 23:07:53.617824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb54d0 00:05:04.418 [2024-04-26 23:07:53.617831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:04.418 [2024-04-26 23:07:53.619093] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.418 [2024-04-26 23:07:53.619112] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.418 Passthru0 00:05:04.418 23:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.418 23:07:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:04.418 23:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.418 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.418 23:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.418 23:07:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.418 { 00:05:04.418 "name": "Malloc2", 00:05:04.418 "aliases": [ 00:05:04.418 "c59c2ce6-4be4-4da0-a0f9-5fe3834b894b" 00:05:04.418 ], 00:05:04.418 "product_name": "Malloc disk", 00:05:04.418 "block_size": 512, 00:05:04.418 "num_blocks": 16384, 00:05:04.418 "uuid": "c59c2ce6-4be4-4da0-a0f9-5fe3834b894b", 00:05:04.418 "assigned_rate_limits": { 00:05:04.418 "rw_ios_per_sec": 0, 00:05:04.418 "rw_mbytes_per_sec": 0, 00:05:04.418 "r_mbytes_per_sec": 0, 00:05:04.418 "w_mbytes_per_sec": 0 00:05:04.418 }, 00:05:04.418 "claimed": true, 00:05:04.418 "claim_type": "exclusive_write", 00:05:04.418 "zoned": false, 00:05:04.418 "supported_io_types": { 00:05:04.418 "read": true, 00:05:04.418 "write": true, 00:05:04.418 "unmap": true, 00:05:04.418 "write_zeroes": true, 00:05:04.418 "flush": true, 00:05:04.418 "reset": true, 00:05:04.418 "compare": false, 00:05:04.418 "compare_and_write": false, 00:05:04.418 "abort": true, 00:05:04.418 "nvme_admin": false, 00:05:04.418 "nvme_io": false 00:05:04.418 }, 00:05:04.418 "memory_domains": [ 00:05:04.418 { 00:05:04.418 "dma_device_id": "system", 00:05:04.418 "dma_device_type": 1 00:05:04.418 }, 00:05:04.418 { 00:05:04.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.418 "dma_device_type": 2 00:05:04.418 } 00:05:04.418 ], 00:05:04.418 "driver_specific": {} 00:05:04.418 }, 00:05:04.418 { 00:05:04.418 "name": "Passthru0", 00:05:04.418 "aliases": [ 00:05:04.418 "990bb83b-d397-5318-b4f7-751c5fe8daf3" 00:05:04.418 ], 00:05:04.418 "product_name": "passthru", 00:05:04.418 "block_size": 512, 00:05:04.418 "num_blocks": 16384, 00:05:04.418 "uuid": "990bb83b-d397-5318-b4f7-751c5fe8daf3", 00:05:04.418 "assigned_rate_limits": { 00:05:04.418 "rw_ios_per_sec": 0, 00:05:04.418 "rw_mbytes_per_sec": 0, 00:05:04.418 "r_mbytes_per_sec": 0, 00:05:04.418 
"w_mbytes_per_sec": 0 00:05:04.418 }, 00:05:04.418 "claimed": false, 00:05:04.418 "zoned": false, 00:05:04.418 "supported_io_types": { 00:05:04.418 "read": true, 00:05:04.418 "write": true, 00:05:04.418 "unmap": true, 00:05:04.418 "write_zeroes": true, 00:05:04.418 "flush": true, 00:05:04.418 "reset": true, 00:05:04.418 "compare": false, 00:05:04.418 "compare_and_write": false, 00:05:04.418 "abort": true, 00:05:04.418 "nvme_admin": false, 00:05:04.418 "nvme_io": false 00:05:04.418 }, 00:05:04.418 "memory_domains": [ 00:05:04.418 { 00:05:04.418 "dma_device_id": "system", 00:05:04.418 "dma_device_type": 1 00:05:04.418 }, 00:05:04.418 { 00:05:04.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.418 "dma_device_type": 2 00:05:04.418 } 00:05:04.418 ], 00:05:04.418 "driver_specific": { 00:05:04.418 "passthru": { 00:05:04.418 "name": "Passthru0", 00:05:04.418 "base_bdev_name": "Malloc2" 00:05:04.418 } 00:05:04.418 } 00:05:04.418 } 00:05:04.418 ]' 00:05:04.418 23:07:53 -- rpc/rpc.sh@21 -- # jq length 00:05:04.678 23:07:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.678 23:07:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.678 23:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.678 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.678 23:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.678 23:07:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:04.678 23:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.678 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.678 23:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.678 23:07:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.678 23:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.678 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.678 23:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.678 23:07:53 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.678 23:07:53 -- rpc/rpc.sh@26 -- # jq length 00:05:04.678 23:07:53 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.678 00:05:04.678 real 0m0.291s 00:05:04.678 user 0m0.181s 00:05:04.678 sys 0m0.048s 00:05:04.678 23:07:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.678 23:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:04.678 ************************************ 00:05:04.678 END TEST rpc_daemon_integrity 00:05:04.678 ************************************ 00:05:04.678 23:07:53 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:04.678 23:07:53 -- rpc/rpc.sh@84 -- # killprocess 3717896 00:05:04.678 23:07:53 -- common/autotest_common.sh@936 -- # '[' -z 3717896 ']' 00:05:04.678 23:07:53 -- common/autotest_common.sh@940 -- # kill -0 3717896 00:05:04.678 23:07:53 -- common/autotest_common.sh@941 -- # uname 00:05:04.678 23:07:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.678 23:07:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3717896 00:05:04.678 23:07:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.678 23:07:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.678 23:07:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3717896' 00:05:04.678 killing process with pid 3717896 00:05:04.678 23:07:53 -- common/autotest_common.sh@955 -- # kill 3717896 00:05:04.678 23:07:53 -- common/autotest_common.sh@960 -- # wait 3717896 00:05:04.938 00:05:04.938 real 0m2.908s 00:05:04.938 user 0m3.861s 
00:05:04.938 sys 0m0.898s 00:05:04.938 23:07:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.938 23:07:54 -- common/autotest_common.sh@10 -- # set +x 00:05:04.938 ************************************ 00:05:04.938 END TEST rpc 00:05:04.938 ************************************ 00:05:04.938 23:07:54 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:04.938 23:07:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.938 23:07:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.938 23:07:54 -- common/autotest_common.sh@10 -- # set +x 00:05:05.198 ************************************ 00:05:05.198 START TEST skip_rpc 00:05:05.198 ************************************ 00:05:05.198 23:07:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:05.198 * Looking for test storage... 00:05:05.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:05.198 23:07:54 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:05.198 23:07:54 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:05.198 23:07:54 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:05.198 23:07:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.198 23:07:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.198 23:07:54 -- common/autotest_common.sh@10 -- # set +x 00:05:05.198 ************************************ 00:05:05.198 START TEST skip_rpc 00:05:05.198 ************************************ 00:05:05.198 23:07:54 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:05:05.198 23:07:54 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3718786 00:05:05.198 23:07:54 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.198 23:07:54 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:05.198 23:07:54 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:05.457 [2024-04-26 23:07:54.504736] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:05.458 [2024-04-26 23:07:54.504792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718786 ] 00:05:05.458 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.458 [2024-04-26 23:07:54.569513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.458 [2024-04-26 23:07:54.606274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.756 23:07:59 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:10.756 23:07:59 -- common/autotest_common.sh@638 -- # local es=0 00:05:10.756 23:07:59 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:10.756 23:07:59 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:10.756 23:07:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:10.756 23:07:59 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:10.756 23:07:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:10.756 23:07:59 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:10.756 23:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.756 23:07:59 -- common/autotest_common.sh@10 -- # set +x 00:05:10.756 23:07:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:10.756 23:07:59 -- common/autotest_common.sh@641 -- # es=1 00:05:10.756 23:07:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:10.756 23:07:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:10.756 23:07:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:10.756 23:07:59 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:10.756 23:07:59 -- rpc/skip_rpc.sh@23 -- # killprocess 3718786 00:05:10.756 23:07:59 -- common/autotest_common.sh@936 -- # '[' -z 3718786 ']' 00:05:10.756 23:07:59 -- common/autotest_common.sh@940 -- # kill -0 3718786 00:05:10.756 23:07:59 -- common/autotest_common.sh@941 -- # uname 00:05:10.756 23:07:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:10.756 23:07:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3718786 00:05:10.756 23:07:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:10.756 23:07:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:10.756 23:07:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3718786' 00:05:10.756 killing process with pid 3718786 00:05:10.757 23:07:59 -- common/autotest_common.sh@955 -- # kill 3718786 00:05:10.757 23:07:59 -- common/autotest_common.sh@960 -- # wait 3718786 00:05:10.757 00:05:10.757 real 0m5.251s 00:05:10.757 user 0m5.055s 00:05:10.757 sys 0m0.225s 00:05:10.757 23:07:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.757 23:07:59 -- common/autotest_common.sh@10 -- # set +x 00:05:10.757 ************************************ 00:05:10.757 END TEST skip_rpc 00:05:10.757 ************************************ 00:05:10.757 23:07:59 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.757 23:07:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.757 23:07:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.757 23:07:59 -- common/autotest_common.sh@10 -- # set +x 00:05:10.757 ************************************ 00:05:10.757 START TEST skip_rpc_with_json 00:05:10.757 ************************************ 
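Note: the skip_rpc test that just finished starts spdk_tgt with --no-rpc-server and asserts that an RPC call fails, using the harness's NOT wrapper to invert the exit status (the es=1 path above). A rough plain-bash equivalent of that inverted check, assuming scripts/rpc.py and a target launched with --no-rpc-server:

  if scripts/rpc.py spdk_get_version; then
      echo "FAIL: RPC unexpectedly succeeded against a --no-rpc-server target"
      exit 1
  fi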
00:05:10.757 23:07:59 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:10.757 23:07:59 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.757 23:07:59 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3719827 00:05:10.757 23:07:59 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.757 23:07:59 -- rpc/skip_rpc.sh@31 -- # waitforlisten 3719827 00:05:10.757 23:07:59 -- common/autotest_common.sh@817 -- # '[' -z 3719827 ']' 00:05:10.757 23:07:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.757 23:07:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:10.757 23:07:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.757 23:07:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:10.757 23:07:59 -- common/autotest_common.sh@10 -- # set +x 00:05:10.757 23:07:59 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.757 [2024-04-26 23:07:59.922829] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:10.757 [2024-04-26 23:07:59.922885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719827 ] 00:05:10.757 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.757 [2024-04-26 23:07:59.985410] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.017 [2024-04-26 23:08:00.025179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.588 23:08:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:11.588 23:08:00 -- common/autotest_common.sh@850 -- # return 0 00:05:11.588 23:08:00 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:11.588 23:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:11.588 23:08:00 -- common/autotest_common.sh@10 -- # set +x 00:05:11.588 [2024-04-26 23:08:00.682842] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:11.588 request: 00:05:11.588 { 00:05:11.588 "trtype": "tcp", 00:05:11.588 "method": "nvmf_get_transports", 00:05:11.588 "req_id": 1 00:05:11.588 } 00:05:11.588 Got JSON-RPC error response 00:05:11.588 response: 00:05:11.588 { 00:05:11.588 "code": -19, 00:05:11.588 "message": "No such device" 00:05:11.588 } 00:05:11.588 23:08:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:11.588 23:08:00 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:11.588 23:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:11.588 23:08:00 -- common/autotest_common.sh@10 -- # set +x 00:05:11.588 [2024-04-26 23:08:00.690949] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.588 23:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:11.588 23:08:00 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:11.588 23:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:11.588 23:08:00 -- common/autotest_common.sh@10 -- # set +x 00:05:11.588 23:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:11.588 23:08:00 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.588 { 
00:05:11.588 "subsystems": [ 00:05:11.588 { 00:05:11.588 "subsystem": "vfio_user_target", 00:05:11.588 "config": null 00:05:11.588 }, 00:05:11.588 { 00:05:11.588 "subsystem": "keyring", 00:05:11.588 "config": [] 00:05:11.588 }, 00:05:11.588 { 00:05:11.588 "subsystem": "iobuf", 00:05:11.588 "config": [ 00:05:11.588 { 00:05:11.588 "method": "iobuf_set_options", 00:05:11.588 "params": { 00:05:11.588 "small_pool_count": 8192, 00:05:11.588 "large_pool_count": 1024, 00:05:11.588 "small_bufsize": 8192, 00:05:11.588 "large_bufsize": 135168 00:05:11.588 } 00:05:11.588 } 00:05:11.588 ] 00:05:11.588 }, 00:05:11.588 { 00:05:11.588 "subsystem": "sock", 00:05:11.588 "config": [ 00:05:11.588 { 00:05:11.588 "method": "sock_impl_set_options", 00:05:11.588 "params": { 00:05:11.588 "impl_name": "posix", 00:05:11.588 "recv_buf_size": 2097152, 00:05:11.588 "send_buf_size": 2097152, 00:05:11.588 "enable_recv_pipe": true, 00:05:11.588 "enable_quickack": false, 00:05:11.588 "enable_placement_id": 0, 00:05:11.588 "enable_zerocopy_send_server": true, 00:05:11.588 "enable_zerocopy_send_client": false, 00:05:11.588 "zerocopy_threshold": 0, 00:05:11.588 "tls_version": 0, 00:05:11.588 "enable_ktls": false 00:05:11.588 } 00:05:11.588 }, 00:05:11.588 { 00:05:11.588 "method": "sock_impl_set_options", 00:05:11.588 "params": { 00:05:11.588 "impl_name": "ssl", 00:05:11.589 "recv_buf_size": 4096, 00:05:11.589 "send_buf_size": 4096, 00:05:11.589 "enable_recv_pipe": true, 00:05:11.589 "enable_quickack": false, 00:05:11.589 "enable_placement_id": 0, 00:05:11.589 "enable_zerocopy_send_server": true, 00:05:11.589 "enable_zerocopy_send_client": false, 00:05:11.589 "zerocopy_threshold": 0, 00:05:11.589 "tls_version": 0, 00:05:11.589 "enable_ktls": false 00:05:11.589 } 00:05:11.589 } 00:05:11.589 ] 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "vmd", 00:05:11.589 "config": [] 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "accel", 00:05:11.589 "config": [ 00:05:11.589 { 00:05:11.589 "method": "accel_set_options", 00:05:11.589 "params": { 00:05:11.589 "small_cache_size": 128, 00:05:11.589 "large_cache_size": 16, 00:05:11.589 "task_count": 2048, 00:05:11.589 "sequence_count": 2048, 00:05:11.589 "buf_count": 2048 00:05:11.589 } 00:05:11.589 } 00:05:11.589 ] 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "bdev", 00:05:11.589 "config": [ 00:05:11.589 { 00:05:11.589 "method": "bdev_set_options", 00:05:11.589 "params": { 00:05:11.589 "bdev_io_pool_size": 65535, 00:05:11.589 "bdev_io_cache_size": 256, 00:05:11.589 "bdev_auto_examine": true, 00:05:11.589 "iobuf_small_cache_size": 128, 00:05:11.589 "iobuf_large_cache_size": 16 00:05:11.589 } 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "method": "bdev_raid_set_options", 00:05:11.589 "params": { 00:05:11.589 "process_window_size_kb": 1024 00:05:11.589 } 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "method": "bdev_iscsi_set_options", 00:05:11.589 "params": { 00:05:11.589 "timeout_sec": 30 00:05:11.589 } 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "method": "bdev_nvme_set_options", 00:05:11.589 "params": { 00:05:11.589 "action_on_timeout": "none", 00:05:11.589 "timeout_us": 0, 00:05:11.589 "timeout_admin_us": 0, 00:05:11.589 "keep_alive_timeout_ms": 10000, 00:05:11.589 "arbitration_burst": 0, 00:05:11.589 "low_priority_weight": 0, 00:05:11.589 "medium_priority_weight": 0, 00:05:11.589 "high_priority_weight": 0, 00:05:11.589 "nvme_adminq_poll_period_us": 10000, 00:05:11.589 "nvme_ioq_poll_period_us": 0, 00:05:11.589 "io_queue_requests": 0, 00:05:11.589 
"delay_cmd_submit": true, 00:05:11.589 "transport_retry_count": 4, 00:05:11.589 "bdev_retry_count": 3, 00:05:11.589 "transport_ack_timeout": 0, 00:05:11.589 "ctrlr_loss_timeout_sec": 0, 00:05:11.589 "reconnect_delay_sec": 0, 00:05:11.589 "fast_io_fail_timeout_sec": 0, 00:05:11.589 "disable_auto_failback": false, 00:05:11.589 "generate_uuids": false, 00:05:11.589 "transport_tos": 0, 00:05:11.589 "nvme_error_stat": false, 00:05:11.589 "rdma_srq_size": 0, 00:05:11.589 "io_path_stat": false, 00:05:11.589 "allow_accel_sequence": false, 00:05:11.589 "rdma_max_cq_size": 0, 00:05:11.589 "rdma_cm_event_timeout_ms": 0, 00:05:11.589 "dhchap_digests": [ 00:05:11.589 "sha256", 00:05:11.589 "sha384", 00:05:11.589 "sha512" 00:05:11.589 ], 00:05:11.589 "dhchap_dhgroups": [ 00:05:11.589 "null", 00:05:11.589 "ffdhe2048", 00:05:11.589 "ffdhe3072", 00:05:11.589 "ffdhe4096", 00:05:11.589 "ffdhe6144", 00:05:11.589 "ffdhe8192" 00:05:11.589 ] 00:05:11.589 } 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "method": "bdev_nvme_set_hotplug", 00:05:11.589 "params": { 00:05:11.589 "period_us": 100000, 00:05:11.589 "enable": false 00:05:11.589 } 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "method": "bdev_wait_for_examine" 00:05:11.589 } 00:05:11.589 ] 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "scsi", 00:05:11.589 "config": null 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "scheduler", 00:05:11.589 "config": [ 00:05:11.589 { 00:05:11.589 "method": "framework_set_scheduler", 00:05:11.589 "params": { 00:05:11.589 "name": "static" 00:05:11.589 } 00:05:11.589 } 00:05:11.589 ] 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "vhost_scsi", 00:05:11.589 "config": [] 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "vhost_blk", 00:05:11.589 "config": [] 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "ublk", 00:05:11.589 "config": [] 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "nbd", 00:05:11.589 "config": [] 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "nvmf", 00:05:11.589 "config": [ 00:05:11.589 { 00:05:11.589 "method": "nvmf_set_config", 00:05:11.589 "params": { 00:05:11.589 "discovery_filter": "match_any", 00:05:11.589 "admin_cmd_passthru": { 00:05:11.589 "identify_ctrlr": false 00:05:11.589 } 00:05:11.589 } 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "method": "nvmf_set_max_subsystems", 00:05:11.589 "params": { 00:05:11.589 "max_subsystems": 1024 00:05:11.589 } 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "method": "nvmf_set_crdt", 00:05:11.589 "params": { 00:05:11.589 "crdt1": 0, 00:05:11.589 "crdt2": 0, 00:05:11.589 "crdt3": 0 00:05:11.589 } 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "method": "nvmf_create_transport", 00:05:11.589 "params": { 00:05:11.589 "trtype": "TCP", 00:05:11.589 "max_queue_depth": 128, 00:05:11.589 "max_io_qpairs_per_ctrlr": 127, 00:05:11.589 "in_capsule_data_size": 4096, 00:05:11.589 "max_io_size": 131072, 00:05:11.589 "io_unit_size": 131072, 00:05:11.589 "max_aq_depth": 128, 00:05:11.589 "num_shared_buffers": 511, 00:05:11.589 "buf_cache_size": 4294967295, 00:05:11.589 "dif_insert_or_strip": false, 00:05:11.589 "zcopy": false, 00:05:11.589 "c2h_success": true, 00:05:11.589 "sock_priority": 0, 00:05:11.589 "abort_timeout_sec": 1, 00:05:11.589 "ack_timeout": 0, 00:05:11.589 "data_wr_pool_size": 0 00:05:11.589 } 00:05:11.589 } 00:05:11.589 ] 00:05:11.589 }, 00:05:11.589 { 00:05:11.589 "subsystem": "iscsi", 00:05:11.589 "config": [ 00:05:11.589 { 00:05:11.589 "method": "iscsi_set_options", 00:05:11.589 "params": { 00:05:11.589 
"node_base": "iqn.2016-06.io.spdk", 00:05:11.589 "max_sessions": 128, 00:05:11.589 "max_connections_per_session": 2, 00:05:11.589 "max_queue_depth": 64, 00:05:11.589 "default_time2wait": 2, 00:05:11.589 "default_time2retain": 20, 00:05:11.589 "first_burst_length": 8192, 00:05:11.589 "immediate_data": true, 00:05:11.589 "allow_duplicated_isid": false, 00:05:11.589 "error_recovery_level": 0, 00:05:11.589 "nop_timeout": 60, 00:05:11.589 "nop_in_interval": 30, 00:05:11.589 "disable_chap": false, 00:05:11.589 "require_chap": false, 00:05:11.589 "mutual_chap": false, 00:05:11.589 "chap_group": 0, 00:05:11.589 "max_large_datain_per_connection": 64, 00:05:11.589 "max_r2t_per_connection": 4, 00:05:11.589 "pdu_pool_size": 36864, 00:05:11.589 "immediate_data_pool_size": 16384, 00:05:11.589 "data_out_pool_size": 2048 00:05:11.589 } 00:05:11.589 } 00:05:11.589 ] 00:05:11.589 } 00:05:11.589 ] 00:05:11.589 } 00:05:11.589 23:08:00 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.589 23:08:00 -- rpc/skip_rpc.sh@40 -- # killprocess 3719827 00:05:11.589 23:08:00 -- common/autotest_common.sh@936 -- # '[' -z 3719827 ']' 00:05:11.589 23:08:00 -- common/autotest_common.sh@940 -- # kill -0 3719827 00:05:11.589 23:08:00 -- common/autotest_common.sh@941 -- # uname 00:05:11.850 23:08:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:11.850 23:08:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3719827 00:05:11.850 23:08:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:11.850 23:08:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:11.850 23:08:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3719827' 00:05:11.850 killing process with pid 3719827 00:05:11.850 23:08:00 -- common/autotest_common.sh@955 -- # kill 3719827 00:05:11.850 23:08:00 -- common/autotest_common.sh@960 -- # wait 3719827 00:05:11.850 23:08:01 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3720167 00:05:11.850 23:08:01 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.850 23:08:01 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:17.141 23:08:06 -- rpc/skip_rpc.sh@50 -- # killprocess 3720167 00:05:17.141 23:08:06 -- common/autotest_common.sh@936 -- # '[' -z 3720167 ']' 00:05:17.141 23:08:06 -- common/autotest_common.sh@940 -- # kill -0 3720167 00:05:17.141 23:08:06 -- common/autotest_common.sh@941 -- # uname 00:05:17.141 23:08:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.142 23:08:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3720167 00:05:17.142 23:08:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.142 23:08:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.142 23:08:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3720167' 00:05:17.142 killing process with pid 3720167 00:05:17.142 23:08:06 -- common/autotest_common.sh@955 -- # kill 3720167 00:05:17.142 23:08:06 -- common/autotest_common.sh@960 -- # wait 3720167 00:05:17.142 23:08:06 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.142 23:08:06 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.142 00:05:17.142 real 0m6.473s 00:05:17.142 user 0m6.355s 00:05:17.142 sys 0m0.493s 00:05:17.142 
23:08:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.142 23:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:17.142 ************************************ 00:05:17.142 END TEST skip_rpc_with_json 00:05:17.142 ************************************ 00:05:17.142 23:08:06 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:17.142 23:08:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.142 23:08:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.142 23:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:17.403 ************************************ 00:05:17.403 START TEST skip_rpc_with_delay 00:05:17.403 ************************************ 00:05:17.403 23:08:06 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:17.403 23:08:06 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.403 23:08:06 -- common/autotest_common.sh@638 -- # local es=0 00:05:17.403 23:08:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.403 23:08:06 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.403 23:08:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:17.403 23:08:06 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.403 23:08:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:17.403 23:08:06 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.403 23:08:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:17.403 23:08:06 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.403 23:08:06 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:17.403 23:08:06 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.403 [2024-04-26 23:08:06.589141] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:17.403 [2024-04-26 23:08:06.589209] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:17.403 23:08:06 -- common/autotest_common.sh@641 -- # es=1 00:05:17.403 23:08:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:17.403 23:08:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:17.403 23:08:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:17.403 00:05:17.403 real 0m0.071s 00:05:17.403 user 0m0.048s 00:05:17.403 sys 0m0.023s 00:05:17.403 23:08:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.403 23:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:17.403 ************************************ 00:05:17.403 END TEST skip_rpc_with_delay 00:05:17.403 ************************************ 00:05:17.403 23:08:06 -- rpc/skip_rpc.sh@77 -- # uname 00:05:17.403 23:08:06 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:17.403 23:08:06 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:17.403 23:08:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.403 23:08:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.403 23:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:17.663 ************************************ 00:05:17.663 START TEST exit_on_failed_rpc_init 00:05:17.663 ************************************ 00:05:17.663 23:08:06 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:17.663 23:08:06 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3721252 00:05:17.663 23:08:06 -- rpc/skip_rpc.sh@63 -- # waitforlisten 3721252 00:05:17.663 23:08:06 -- common/autotest_common.sh@817 -- # '[' -z 3721252 ']' 00:05:17.663 23:08:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.663 23:08:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.663 23:08:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.663 23:08:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.663 23:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:17.663 23:08:06 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.663 [2024-04-26 23:08:06.827357] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:17.663 [2024-04-26 23:08:06.827405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721252 ] 00:05:17.663 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.663 [2024-04-26 23:08:06.887781] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.924 [2024-04-26 23:08:06.919243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.495 23:08:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:18.495 23:08:07 -- common/autotest_common.sh@850 -- # return 0 00:05:18.495 23:08:07 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.495 23:08:07 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.495 23:08:07 -- common/autotest_common.sh@638 -- # local es=0 00:05:18.495 23:08:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.495 23:08:07 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.495 23:08:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:18.495 23:08:07 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.495 23:08:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:18.495 23:08:07 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.495 23:08:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:18.495 23:08:07 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.495 23:08:07 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:18.495 23:08:07 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.495 [2024-04-26 23:08:07.612191] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:18.495 [2024-04-26 23:08:07.612245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721500 ] 00:05:18.495 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.495 [2024-04-26 23:08:07.670879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.495 [2024-04-26 23:08:07.699827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.495 [2024-04-26 23:08:07.699895] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:18.495 [2024-04-26 23:08:07.699904] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:18.495 [2024-04-26 23:08:07.699911] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.495 23:08:07 -- common/autotest_common.sh@641 -- # es=234 00:05:18.495 23:08:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:18.495 23:08:07 -- common/autotest_common.sh@650 -- # es=106 00:05:18.495 23:08:07 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:18.495 23:08:07 -- common/autotest_common.sh@658 -- # es=1 00:05:18.495 23:08:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:18.495 23:08:07 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:18.495 23:08:07 -- rpc/skip_rpc.sh@70 -- # killprocess 3721252 00:05:18.495 23:08:07 -- common/autotest_common.sh@936 -- # '[' -z 3721252 ']' 00:05:18.495 23:08:07 -- common/autotest_common.sh@940 -- # kill -0 3721252 00:05:18.495 23:08:07 -- common/autotest_common.sh@941 -- # uname 00:05:18.755 23:08:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.755 23:08:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3721252 00:05:18.755 23:08:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.755 23:08:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.755 23:08:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3721252' 00:05:18.755 killing process with pid 3721252 00:05:18.755 23:08:07 -- common/autotest_common.sh@955 -- # kill 3721252 00:05:18.755 23:08:07 -- common/autotest_common.sh@960 -- # wait 3721252 00:05:18.755 00:05:18.755 real 0m1.215s 00:05:18.755 user 0m1.370s 00:05:18.755 sys 0m0.341s 00:05:18.755 23:08:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.755 23:08:07 -- common/autotest_common.sh@10 -- # set +x 00:05:18.755 ************************************ 00:05:18.755 END TEST exit_on_failed_rpc_init 00:05:18.755 ************************************ 00:05:19.016 23:08:08 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.016 00:05:19.016 real 0m13.798s 00:05:19.016 user 0m13.127s 00:05:19.016 sys 0m1.521s 00:05:19.016 23:08:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.016 23:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:19.016 ************************************ 00:05:19.016 END TEST skip_rpc 00:05:19.016 ************************************ 00:05:19.016 23:08:08 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:19.016 23:08:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.016 23:08:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.016 23:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:19.016 ************************************ 00:05:19.016 START TEST rpc_client 00:05:19.016 ************************************ 00:05:19.016 23:08:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:19.276 * Looking for test storage... 
00:05:19.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:19.276 23:08:08 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:19.276 OK 00:05:19.276 23:08:08 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:19.276 00:05:19.276 real 0m0.131s 00:05:19.276 user 0m0.056s 00:05:19.276 sys 0m0.082s 00:05:19.276 23:08:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.276 23:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:19.276 ************************************ 00:05:19.276 END TEST rpc_client 00:05:19.276 ************************************ 00:05:19.276 23:08:08 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.276 23:08:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.276 23:08:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.276 23:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:19.276 ************************************ 00:05:19.276 START TEST json_config 00:05:19.276 ************************************ 00:05:19.276 23:08:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.538 23:08:08 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.538 23:08:08 -- nvmf/common.sh@7 -- # uname -s 00:05:19.538 23:08:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.538 23:08:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.538 23:08:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.538 23:08:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.538 23:08:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.538 23:08:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.538 23:08:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.538 23:08:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.538 23:08:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.538 23:08:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.538 23:08:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:19.538 23:08:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:19.538 23:08:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.538 23:08:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.538 23:08:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.538 23:08:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.538 23:08:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.538 23:08:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.538 23:08:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.538 23:08:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.538 23:08:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.538 23:08:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.539 23:08:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.539 23:08:08 -- paths/export.sh@5 -- # export PATH 00:05:19.539 23:08:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.539 23:08:08 -- nvmf/common.sh@47 -- # : 0 00:05:19.539 23:08:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:19.539 23:08:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:19.539 23:08:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.539 23:08:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.539 23:08:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.539 23:08:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:19.539 23:08:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:19.539 23:08:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:19.539 23:08:08 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:19.539 23:08:08 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:19.539 23:08:08 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:19.539 23:08:08 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:19.539 23:08:08 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.539 23:08:08 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:19.539 23:08:08 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:19.539 23:08:08 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:19.539 23:08:08 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:19.539 23:08:08 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:19.539 23:08:08 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:19.539 23:08:08 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:19.539 23:08:08 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:19.539 23:08:08 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:19.539 23:08:08 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.539 23:08:08 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:19.539 INFO: JSON configuration test init 00:05:19.539 23:08:08 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:19.539 23:08:08 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:19.539 23:08:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:19.539 23:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:19.539 23:08:08 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:19.539 23:08:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:19.539 23:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:19.539 23:08:08 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:19.539 23:08:08 -- json_config/common.sh@9 -- # local app=target 00:05:19.539 23:08:08 -- json_config/common.sh@10 -- # shift 00:05:19.539 23:08:08 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.539 23:08:08 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.539 23:08:08 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.539 23:08:08 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.539 23:08:08 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.539 23:08:08 -- json_config/common.sh@22 -- # app_pid["$app"]=3721721 00:05:19.539 23:08:08 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.539 Waiting for target to run... 00:05:19.539 23:08:08 -- json_config/common.sh@25 -- # waitforlisten 3721721 /var/tmp/spdk_tgt.sock 00:05:19.539 23:08:08 -- common/autotest_common.sh@817 -- # '[' -z 3721721 ']' 00:05:19.539 23:08:08 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:19.539 23:08:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.539 23:08:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:19.539 23:08:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.539 23:08:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:19.539 23:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:19.539 [2024-04-26 23:08:08.674887] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
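The --wait-for-rpc flag above holds subsystem initialization until a configuration arrives over the RPC socket; the test then pipes a generated config into load_config. A minimal sketch of that two-phase flow under the same workspace paths (the explicit framework_start_init call is an assumption here; the harness drives initialization from inside json_config.sh):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/spdk_tgt.sock

# Phase 1: start the target but defer subsystem init until told otherwise
$SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r $SOCK --wait-for-rpc &

# Phase 2: generate a config from the local NVMe devices, load it, then init
$SPDK/scripts/gen_nvme.sh --json-with-subsystems | $SPDK/scripts/rpc.py -s $SOCK load_config
$SPDK/scripts/rpc.py -s $SOCK framework_start_init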
00:05:19.539 [2024-04-26 23:08:08.674959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721721 ] 00:05:19.539 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.800 [2024-04-26 23:08:08.943884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.800 [2024-04-26 23:08:08.960665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.371 23:08:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:20.371 23:08:09 -- common/autotest_common.sh@850 -- # return 0 00:05:20.371 23:08:09 -- json_config/common.sh@26 -- # echo '' 00:05:20.371 00:05:20.371 23:08:09 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:20.371 23:08:09 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:20.371 23:08:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:20.371 23:08:09 -- common/autotest_common.sh@10 -- # set +x 00:05:20.371 23:08:09 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:20.371 23:08:09 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:20.371 23:08:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:20.371 23:08:09 -- common/autotest_common.sh@10 -- # set +x 00:05:20.371 23:08:09 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:20.371 23:08:09 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:20.371 23:08:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:20.942 23:08:09 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:20.942 23:08:09 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:20.942 23:08:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:20.942 23:08:09 -- common/autotest_common.sh@10 -- # set +x 00:05:20.942 23:08:09 -- json_config/json_config.sh@45 -- # local ret=0 00:05:20.942 23:08:09 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:20.942 23:08:09 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:20.942 23:08:09 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:20.942 23:08:09 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:20.942 23:08:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:20.942 23:08:10 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:20.942 23:08:10 -- json_config/json_config.sh@48 -- # local get_types 00:05:20.942 23:08:10 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:20.942 23:08:10 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:20.942 23:08:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:20.942 23:08:10 -- common/autotest_common.sh@10 -- # set +x 00:05:20.942 23:08:10 -- json_config/json_config.sh@55 -- # return 0 00:05:20.942 23:08:10 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:20.942 23:08:10 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:20.942 23:08:10 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:20.942 23:08:10 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:20.942 23:08:10 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:20.942 23:08:10 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:20.942 23:08:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:20.942 23:08:10 -- common/autotest_common.sh@10 -- # set +x 00:05:20.942 23:08:10 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:21.203 23:08:10 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:21.203 23:08:10 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:21.203 23:08:10 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:21.203 23:08:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:21.203 MallocForNvmf0 00:05:21.203 23:08:10 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:21.203 23:08:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:21.465 MallocForNvmf1 00:05:21.465 23:08:10 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:21.465 23:08:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:21.465 [2024-04-26 23:08:10.637452] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.465 23:08:10 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:21.465 23:08:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:21.727 23:08:10 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:21.727 23:08:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:21.727 23:08:10 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:21.727 23:08:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:21.987 23:08:11 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:21.987 23:08:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:22.249 [2024-04-26 23:08:11.255477] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:22.249 23:08:11 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:22.249 23:08:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:22.249 
23:08:11 -- common/autotest_common.sh@10 -- # set +x 00:05:22.249 23:08:11 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:22.249 23:08:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:22.249 23:08:11 -- common/autotest_common.sh@10 -- # set +x 00:05:22.249 23:08:11 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:22.249 23:08:11 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:22.249 23:08:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:22.249 MallocBdevForConfigChangeCheck 00:05:22.249 23:08:11 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:22.249 23:08:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:22.249 23:08:11 -- common/autotest_common.sh@10 -- # set +x 00:05:22.510 23:08:11 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:22.510 23:08:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.771 23:08:11 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:22.771 INFO: shutting down applications... 00:05:22.771 23:08:11 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:22.771 23:08:11 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:22.771 23:08:11 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:22.771 23:08:11 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:23.034 Calling clear_iscsi_subsystem 00:05:23.034 Calling clear_nvmf_subsystem 00:05:23.034 Calling clear_nbd_subsystem 00:05:23.034 Calling clear_ublk_subsystem 00:05:23.034 Calling clear_vhost_blk_subsystem 00:05:23.034 Calling clear_vhost_scsi_subsystem 00:05:23.034 Calling clear_bdev_subsystem 00:05:23.034 23:08:12 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:23.034 23:08:12 -- json_config/json_config.sh@343 -- # count=100 00:05:23.034 23:08:12 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:23.034 23:08:12 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.034 23:08:12 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:23.034 23:08:12 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:23.294 23:08:12 -- json_config/json_config.sh@345 -- # break 00:05:23.294 23:08:12 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:23.294 23:08:12 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:23.294 23:08:12 -- json_config/common.sh@31 -- # local app=target 00:05:23.294 23:08:12 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:23.294 23:08:12 -- json_config/common.sh@35 -- # [[ -n 3721721 ]] 00:05:23.294 23:08:12 -- json_config/common.sh@38 -- # kill -SIGINT 3721721 00:05:23.294 23:08:12 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:23.294 23:08:12 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.294 23:08:12 -- json_config/common.sh@41 -- # kill -0 3721721 00:05:23.294 23:08:12 -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.863 23:08:13 -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.863 23:08:13 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.863 23:08:13 -- json_config/common.sh@41 -- # kill -0 3721721 00:05:23.863 23:08:13 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.863 23:08:13 -- json_config/common.sh@43 -- # break 00:05:23.863 23:08:13 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.863 23:08:13 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.863 SPDK target shutdown done 00:05:23.863 23:08:13 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:23.863 INFO: relaunching applications... 00:05:23.863 23:08:13 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.863 23:08:13 -- json_config/common.sh@9 -- # local app=target 00:05:23.863 23:08:13 -- json_config/common.sh@10 -- # shift 00:05:23.863 23:08:13 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.863 23:08:13 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.863 23:08:13 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.863 23:08:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.863 23:08:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.863 23:08:13 -- json_config/common.sh@22 -- # app_pid["$app"]=3722842 00:05:23.863 23:08:13 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.863 Waiting for target to run... 00:05:23.863 23:08:13 -- json_config/common.sh@25 -- # waitforlisten 3722842 /var/tmp/spdk_tgt.sock 00:05:23.863 23:08:13 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.863 23:08:13 -- common/autotest_common.sh@817 -- # '[' -z 3722842 ']' 00:05:23.863 23:08:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.863 23:08:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:23.863 23:08:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.863 23:08:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:23.863 23:08:13 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 [2024-04-26 23:08:13.084031] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
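The relaunch above boots from spdk_tgt_config.json, replaying the NVMe-oF state that the first run built one RPC at a time. Condensed from the log, that sequence was roughly:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Two malloc bdevs to act as namespaces (size in MiB, block size in bytes)
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

# TCP transport, a subsystem carrying both namespaces, and a listener on 4420
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420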
00:05:23.863 [2024-04-26 23:08:13.084086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722842 ] 00:05:23.863 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.123 [2024-04-26 23:08:13.355720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.123 [2024-04-26 23:08:13.372663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.693 [2024-04-26 23:08:13.839765] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.693 [2024-04-26 23:08:13.872111] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:24.693 23:08:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.693 23:08:13 -- common/autotest_common.sh@850 -- # return 0 00:05:24.693 23:08:13 -- json_config/common.sh@26 -- # echo '' 00:05:24.694 00:05:24.694 23:08:13 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:24.694 23:08:13 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:24.694 INFO: Checking if target configuration is the same... 00:05:24.694 23:08:13 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.694 23:08:13 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:24.694 23:08:13 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.694 + '[' 2 -ne 2 ']' 00:05:24.694 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:24.694 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:24.694 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:24.694 +++ basename /dev/fd/62 00:05:24.694 ++ mktemp /tmp/62.XXX 00:05:24.694 + tmp_file_1=/tmp/62.WLo 00:05:24.694 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.694 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:24.694 + tmp_file_2=/tmp/spdk_tgt_config.json.JtN 00:05:24.694 + ret=0 00:05:24.694 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.954 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.214 + diff -u /tmp/62.WLo /tmp/spdk_tgt_config.json.JtN 00:05:25.214 + echo 'INFO: JSON config files are the same' 00:05:25.214 INFO: JSON config files are the same 00:05:25.214 + rm /tmp/62.WLo /tmp/spdk_tgt_config.json.JtN 00:05:25.214 + exit 0 00:05:25.214 23:08:14 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:25.214 23:08:14 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:25.214 INFO: changing configuration and checking if this can be detected... 
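The verdict just printed comes from diffing two normalized dumps: json_diff.sh pulls the live configuration with save_config, canonicalizes both JSON documents with config_filter.py -method sort, and lets a plain diff decide. A stripped-down version of the same check (temp-file names are placeholders for the mktemp paths in the log):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
FILTER=$SPDK/test/json_config/config_filter.py

# Canonicalize the running config and the on-disk config, then compare
$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live.json
$FILTER -method sort < $SPDK/spdk_tgt_config.json > /tmp/file.json
diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'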
00:05:25.214 23:08:14 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:25.214 23:08:14 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:25.214 23:08:14 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.214 23:08:14 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:25.214 23:08:14 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.214 + '[' 2 -ne 2 ']' 00:05:25.214 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:25.214 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:25.214 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.214 +++ basename /dev/fd/62 00:05:25.214 ++ mktemp /tmp/62.XXX 00:05:25.214 + tmp_file_1=/tmp/62.f0p 00:05:25.214 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.214 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:25.214 + tmp_file_2=/tmp/spdk_tgt_config.json.haf 00:05:25.214 + ret=0 00:05:25.214 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.520 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.520 + diff -u /tmp/62.f0p /tmp/spdk_tgt_config.json.haf 00:05:25.520 + ret=1 00:05:25.520 + echo '=== Start of file: /tmp/62.f0p ===' 00:05:25.520 + cat /tmp/62.f0p 00:05:25.520 + echo '=== End of file: /tmp/62.f0p ===' 00:05:25.520 + echo '' 00:05:25.520 + echo '=== Start of file: /tmp/spdk_tgt_config.json.haf ===' 00:05:25.520 + cat /tmp/spdk_tgt_config.json.haf 00:05:25.520 + echo '=== End of file: /tmp/spdk_tgt_config.json.haf ===' 00:05:25.520 + echo '' 00:05:25.520 + rm /tmp/62.f0p /tmp/spdk_tgt_config.json.haf 00:05:25.520 + exit 1 00:05:25.520 23:08:14 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:25.520 INFO: configuration change detected. 
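The failing diff above is deliberate: the test plants a throwaway marker bdev during setup, deletes it here, and treats "no difference" as the error. Continuing the sketch above (reusing $FILTER and /tmp/file.json from it):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Planted during setup:
#   $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
$RPC save_config | $FILTER -method sort > /tmp/live.json
if diff -u /tmp/live.json /tmp/file.json; then
    echo 'ERROR: configuration change was not detected' >&2
    exit 1
fi
echo 'INFO: configuration change detected.'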
00:05:25.520 23:08:14 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:25.520 23:08:14 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:25.520 23:08:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:25.520 23:08:14 -- common/autotest_common.sh@10 -- # set +x 00:05:25.520 23:08:14 -- json_config/json_config.sh@307 -- # local ret=0 00:05:25.520 23:08:14 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:25.520 23:08:14 -- json_config/json_config.sh@317 -- # [[ -n 3722842 ]] 00:05:25.520 23:08:14 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:25.520 23:08:14 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:25.520 23:08:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:25.520 23:08:14 -- common/autotest_common.sh@10 -- # set +x 00:05:25.520 23:08:14 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:25.520 23:08:14 -- json_config/json_config.sh@193 -- # uname -s 00:05:25.520 23:08:14 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:25.520 23:08:14 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:25.520 23:08:14 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:25.520 23:08:14 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:25.520 23:08:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:25.520 23:08:14 -- common/autotest_common.sh@10 -- # set +x 00:05:25.807 23:08:14 -- json_config/json_config.sh@323 -- # killprocess 3722842 00:05:25.807 23:08:14 -- common/autotest_common.sh@936 -- # '[' -z 3722842 ']' 00:05:25.807 23:08:14 -- common/autotest_common.sh@940 -- # kill -0 3722842 00:05:25.807 23:08:14 -- common/autotest_common.sh@941 -- # uname 00:05:25.807 23:08:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.807 23:08:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3722842 00:05:25.807 23:08:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.807 23:08:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.807 23:08:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3722842' 00:05:25.807 killing process with pid 3722842 00:05:25.807 23:08:14 -- common/autotest_common.sh@955 -- # kill 3722842 00:05:25.807 23:08:14 -- common/autotest_common.sh@960 -- # wait 3722842 00:05:26.066 23:08:15 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.066 23:08:15 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:26.066 23:08:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:26.066 23:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.066 23:08:15 -- json_config/json_config.sh@328 -- # return 0 00:05:26.066 23:08:15 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:26.066 INFO: Success 00:05:26.066 00:05:26.066 real 0m6.657s 00:05:26.066 user 0m7.941s 00:05:26.066 sys 0m1.656s 00:05:26.066 23:08:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.066 23:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.066 ************************************ 00:05:26.066 END TEST json_config 00:05:26.066 ************************************ 00:05:26.066 23:08:15 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.066 23:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.066 23:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.066 23:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.327 ************************************ 00:05:26.327 START TEST json_config_extra_key 00:05:26.327 ************************************ 00:05:26.327 23:08:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.327 23:08:15 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.327 23:08:15 -- nvmf/common.sh@7 -- # uname -s 00:05:26.327 23:08:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.327 23:08:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.327 23:08:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.327 23:08:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.327 23:08:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.327 23:08:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.327 23:08:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.327 23:08:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.327 23:08:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.327 23:08:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.327 23:08:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:26.327 23:08:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:26.327 23:08:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.327 23:08:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.327 23:08:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.327 23:08:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.327 23:08:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.327 23:08:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.328 23:08:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.328 23:08:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.328 23:08:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.328 23:08:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.328 23:08:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.328 23:08:15 -- paths/export.sh@5 -- # export PATH 00:05:26.328 23:08:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.328 23:08:15 -- nvmf/common.sh@47 -- # : 0 00:05:26.328 23:08:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.328 23:08:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.328 23:08:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.328 23:08:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.328 23:08:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.328 23:08:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.328 23:08:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.328 23:08:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:26.328 INFO: launching applications... 
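Unlike the --wait-for-rpc flow earlier, this test boots the target fully configured from a JSON file. A sketch of the launch plus a readiness wait (polling rpc_get_methods is an assumption; the harness's waitforlisten helper is more elaborate):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/spdk_tgt.sock

$SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r $SOCK --json $SPDK/test/json_config/extra_key.json &

# Wait until the RPC socket answers before driving the test
until $SPDK/scripts/rpc.py -s $SOCK rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done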
00:05:26.328 23:08:15 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.328 23:08:15 -- json_config/common.sh@9 -- # local app=target 00:05:26.328 23:08:15 -- json_config/common.sh@10 -- # shift 00:05:26.328 23:08:15 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.328 23:08:15 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.328 23:08:15 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.328 23:08:15 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.328 23:08:15 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.328 23:08:15 -- json_config/common.sh@22 -- # app_pid["$app"]=3723328 00:05:26.328 23:08:15 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.328 Waiting for target to run... 00:05:26.328 23:08:15 -- json_config/common.sh@25 -- # waitforlisten 3723328 /var/tmp/spdk_tgt.sock 00:05:26.328 23:08:15 -- common/autotest_common.sh@817 -- # '[' -z 3723328 ']' 00:05:26.328 23:08:15 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.328 23:08:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.328 23:08:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:26.328 23:08:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.328 23:08:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:26.328 23:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.328 [2024-04-26 23:08:15.505196] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:26.328 [2024-04-26 23:08:15.505252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723328 ] 00:05:26.328 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.898 [2024-04-26 23:08:15.879521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.898 [2024-04-26 23:08:15.904507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.157 23:08:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.157 23:08:16 -- common/autotest_common.sh@850 -- # return 0 00:05:27.157 23:08:16 -- json_config/common.sh@26 -- # echo '' 00:05:27.157 00:05:27.157 23:08:16 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:27.157 INFO: shutting down applications... 
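The shutdown that follows is the loop visible in common.sh above: send SIGINT, then poll the PID with kill -0 for up to 30 half-second intervals. In outline, with $app_pid holding the target's PID:

kill -SIGINT "$app_pid"

for i in $(seq 1 30); do
    kill -0 "$app_pid" 2>/dev/null || break   # exited: shutdown is complete
    sleep 0.5
done
echo 'SPDK target shutdown done'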
00:05:27.157 23:08:16 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:27.157 23:08:16 -- json_config/common.sh@31 -- # local app=target 00:05:27.157 23:08:16 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.157 23:08:16 -- json_config/common.sh@35 -- # [[ -n 3723328 ]] 00:05:27.157 23:08:16 -- json_config/common.sh@38 -- # kill -SIGINT 3723328 00:05:27.157 23:08:16 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.157 23:08:16 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.157 23:08:16 -- json_config/common.sh@41 -- # kill -0 3723328 00:05:27.157 23:08:16 -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.728 23:08:16 -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.728 23:08:16 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.728 23:08:16 -- json_config/common.sh@41 -- # kill -0 3723328 00:05:27.728 23:08:16 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:27.728 23:08:16 -- json_config/common.sh@43 -- # break 00:05:27.728 23:08:16 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.728 23:08:16 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.728 SPDK target shutdown done 00:05:27.728 23:08:16 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.728 Success 00:05:27.728 00:05:27.728 real 0m1.420s 00:05:27.728 user 0m0.929s 00:05:27.728 sys 0m0.472s 00:05:27.728 23:08:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.728 23:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:27.728 ************************************ 00:05:27.728 END TEST json_config_extra_key 00:05:27.728 ************************************ 00:05:27.728 23:08:16 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.728 23:08:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.728 23:08:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.728 23:08:16 -- common/autotest_common.sh@10 -- # set +x 00:05:27.728 ************************************ 00:05:27.728 START TEST alias_rpc 00:05:27.728 ************************************ 00:05:27.728 23:08:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.988 * Looking for test storage... 00:05:27.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:27.989 23:08:17 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.989 23:08:17 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3723704 00:05:27.989 23:08:17 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3723704 00:05:27.989 23:08:17 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.989 23:08:17 -- common/autotest_common.sh@817 -- # '[' -z 3723704 ']' 00:05:27.989 23:08:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.989 23:08:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.989 23:08:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
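alias_rpc will later tear this target down through killprocess, whose guard logic shows up piecewise in the log: confirm the PID is alive, check its command name (expecting a reactor, never sudo), then kill and wait. A simplified reconstruction, with the exact ordering of checks assumed:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                      # is the process still alive?
    local name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    [ "$name" = sudo ] && return 1                  # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}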
00:05:27.989 23:08:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.989 23:08:17 -- common/autotest_common.sh@10 -- # set +x 00:05:27.989 [2024-04-26 23:08:17.094475] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:27.989 [2024-04-26 23:08:17.094545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723704 ] 00:05:27.989 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.989 [2024-04-26 23:08:17.163755] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.989 [2024-04-26 23:08:17.200988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.928 23:08:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.928 23:08:17 -- common/autotest_common.sh@850 -- # return 0 00:05:28.928 23:08:17 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:28.928 23:08:18 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3723704 00:05:28.928 23:08:18 -- common/autotest_common.sh@936 -- # '[' -z 3723704 ']' 00:05:28.928 23:08:18 -- common/autotest_common.sh@940 -- # kill -0 3723704 00:05:28.928 23:08:18 -- common/autotest_common.sh@941 -- # uname 00:05:28.928 23:08:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.928 23:08:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3723704 00:05:28.928 23:08:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.928 23:08:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.928 23:08:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3723704' 00:05:28.928 killing process with pid 3723704 00:05:28.928 23:08:18 -- common/autotest_common.sh@955 -- # kill 3723704 00:05:28.928 23:08:18 -- common/autotest_common.sh@960 -- # wait 3723704 00:05:29.187 00:05:29.187 real 0m1.352s 00:05:29.187 user 0m1.469s 00:05:29.187 sys 0m0.380s 00:05:29.187 23:08:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.187 23:08:18 -- common/autotest_common.sh@10 -- # set +x 00:05:29.187 ************************************ 00:05:29.187 END TEST alias_rpc 00:05:29.187 ************************************ 00:05:29.187 23:08:18 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:29.187 23:08:18 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.187 23:08:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.187 23:08:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.187 23:08:18 -- common/autotest_common.sh@10 -- # set +x 00:05:29.448 ************************************ 00:05:29.448 START TEST spdkcli_tcp 00:05:29.448 ************************************ 00:05:29.448 23:08:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.448 * Looking for test storage... 
00:05:29.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:29.448 23:08:18 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:29.448 23:08:18 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:29.448 23:08:18 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:29.448 23:08:18 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:29.448 23:08:18 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:29.448 23:08:18 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:29.448 23:08:18 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:29.448 23:08:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:29.448 23:08:18 -- common/autotest_common.sh@10 -- # set +x 00:05:29.448 23:08:18 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3724106 00:05:29.448 23:08:18 -- spdkcli/tcp.sh@27 -- # waitforlisten 3724106 00:05:29.448 23:08:18 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:29.448 23:08:18 -- common/autotest_common.sh@817 -- # '[' -z 3724106 ']' 00:05:29.448 23:08:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.448 23:08:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.448 23:08:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.448 23:08:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.448 23:08:18 -- common/autotest_common.sh@10 -- # set +x 00:05:29.448 [2024-04-26 23:08:18.644666] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
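Where the earlier tests spoke to the target over a UNIX socket, spdkcli_tcp checks the TCP path by bridging a local port to that socket with socat, exactly as the next lines show:

# Bridge TCP port 9998 to the target's UNIX-domain RPC socket
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

# Drive an RPC over TCP: up to 100 retries, 2-second timeout per attempt
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods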
00:05:29.448 [2024-04-26 23:08:18.644720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724106 ] 00:05:29.448 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.710 [2024-04-26 23:08:18.708660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.710 [2024-04-26 23:08:18.742442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.710 [2024-04-26 23:08:18.742448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.281 23:08:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.281 23:08:19 -- common/autotest_common.sh@850 -- # return 0 00:05:30.281 23:08:19 -- spdkcli/tcp.sh@31 -- # socat_pid=3724393 00:05:30.281 23:08:19 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:30.281 23:08:19 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:30.542 [ 00:05:30.542 "bdev_malloc_delete", 00:05:30.542 "bdev_malloc_create", 00:05:30.542 "bdev_null_resize", 00:05:30.542 "bdev_null_delete", 00:05:30.542 "bdev_null_create", 00:05:30.542 "bdev_nvme_cuse_unregister", 00:05:30.542 "bdev_nvme_cuse_register", 00:05:30.542 "bdev_opal_new_user", 00:05:30.542 "bdev_opal_set_lock_state", 00:05:30.542 "bdev_opal_delete", 00:05:30.542 "bdev_opal_get_info", 00:05:30.542 "bdev_opal_create", 00:05:30.542 "bdev_nvme_opal_revert", 00:05:30.542 "bdev_nvme_opal_init", 00:05:30.542 "bdev_nvme_send_cmd", 00:05:30.542 "bdev_nvme_get_path_iostat", 00:05:30.542 "bdev_nvme_get_mdns_discovery_info", 00:05:30.542 "bdev_nvme_stop_mdns_discovery", 00:05:30.542 "bdev_nvme_start_mdns_discovery", 00:05:30.542 "bdev_nvme_set_multipath_policy", 00:05:30.542 "bdev_nvme_set_preferred_path", 00:05:30.542 "bdev_nvme_get_io_paths", 00:05:30.542 "bdev_nvme_remove_error_injection", 00:05:30.542 "bdev_nvme_add_error_injection", 00:05:30.542 "bdev_nvme_get_discovery_info", 00:05:30.542 "bdev_nvme_stop_discovery", 00:05:30.542 "bdev_nvme_start_discovery", 00:05:30.542 "bdev_nvme_get_controller_health_info", 00:05:30.542 "bdev_nvme_disable_controller", 00:05:30.542 "bdev_nvme_enable_controller", 00:05:30.542 "bdev_nvme_reset_controller", 00:05:30.542 "bdev_nvme_get_transport_statistics", 00:05:30.542 "bdev_nvme_apply_firmware", 00:05:30.542 "bdev_nvme_detach_controller", 00:05:30.542 "bdev_nvme_get_controllers", 00:05:30.542 "bdev_nvme_attach_controller", 00:05:30.542 "bdev_nvme_set_hotplug", 00:05:30.542 "bdev_nvme_set_options", 00:05:30.542 "bdev_passthru_delete", 00:05:30.542 "bdev_passthru_create", 00:05:30.542 "bdev_lvol_grow_lvstore", 00:05:30.542 "bdev_lvol_get_lvols", 00:05:30.542 "bdev_lvol_get_lvstores", 00:05:30.542 "bdev_lvol_delete", 00:05:30.542 "bdev_lvol_set_read_only", 00:05:30.542 "bdev_lvol_resize", 00:05:30.542 "bdev_lvol_decouple_parent", 00:05:30.542 "bdev_lvol_inflate", 00:05:30.542 "bdev_lvol_rename", 00:05:30.542 "bdev_lvol_clone_bdev", 00:05:30.542 "bdev_lvol_clone", 00:05:30.542 "bdev_lvol_snapshot", 00:05:30.542 "bdev_lvol_create", 00:05:30.542 "bdev_lvol_delete_lvstore", 00:05:30.542 "bdev_lvol_rename_lvstore", 00:05:30.542 "bdev_lvol_create_lvstore", 00:05:30.542 "bdev_raid_set_options", 00:05:30.542 "bdev_raid_remove_base_bdev", 00:05:30.542 "bdev_raid_add_base_bdev", 00:05:30.542 "bdev_raid_delete", 00:05:30.542 "bdev_raid_create", 
00:05:30.542 "bdev_raid_get_bdevs", 00:05:30.542 "bdev_error_inject_error", 00:05:30.542 "bdev_error_delete", 00:05:30.542 "bdev_error_create", 00:05:30.542 "bdev_split_delete", 00:05:30.542 "bdev_split_create", 00:05:30.542 "bdev_delay_delete", 00:05:30.542 "bdev_delay_create", 00:05:30.542 "bdev_delay_update_latency", 00:05:30.542 "bdev_zone_block_delete", 00:05:30.542 "bdev_zone_block_create", 00:05:30.542 "blobfs_create", 00:05:30.542 "blobfs_detect", 00:05:30.542 "blobfs_set_cache_size", 00:05:30.542 "bdev_aio_delete", 00:05:30.542 "bdev_aio_rescan", 00:05:30.542 "bdev_aio_create", 00:05:30.542 "bdev_ftl_set_property", 00:05:30.542 "bdev_ftl_get_properties", 00:05:30.542 "bdev_ftl_get_stats", 00:05:30.542 "bdev_ftl_unmap", 00:05:30.542 "bdev_ftl_unload", 00:05:30.542 "bdev_ftl_delete", 00:05:30.542 "bdev_ftl_load", 00:05:30.542 "bdev_ftl_create", 00:05:30.542 "bdev_virtio_attach_controller", 00:05:30.542 "bdev_virtio_scsi_get_devices", 00:05:30.542 "bdev_virtio_detach_controller", 00:05:30.542 "bdev_virtio_blk_set_hotplug", 00:05:30.542 "bdev_iscsi_delete", 00:05:30.542 "bdev_iscsi_create", 00:05:30.542 "bdev_iscsi_set_options", 00:05:30.542 "accel_error_inject_error", 00:05:30.542 "ioat_scan_accel_module", 00:05:30.542 "dsa_scan_accel_module", 00:05:30.542 "iaa_scan_accel_module", 00:05:30.542 "vfu_virtio_create_scsi_endpoint", 00:05:30.542 "vfu_virtio_scsi_remove_target", 00:05:30.542 "vfu_virtio_scsi_add_target", 00:05:30.542 "vfu_virtio_create_blk_endpoint", 00:05:30.542 "vfu_virtio_delete_endpoint", 00:05:30.542 "keyring_file_remove_key", 00:05:30.542 "keyring_file_add_key", 00:05:30.542 "iscsi_get_histogram", 00:05:30.542 "iscsi_enable_histogram", 00:05:30.542 "iscsi_set_options", 00:05:30.542 "iscsi_get_auth_groups", 00:05:30.542 "iscsi_auth_group_remove_secret", 00:05:30.542 "iscsi_auth_group_add_secret", 00:05:30.542 "iscsi_delete_auth_group", 00:05:30.542 "iscsi_create_auth_group", 00:05:30.542 "iscsi_set_discovery_auth", 00:05:30.542 "iscsi_get_options", 00:05:30.542 "iscsi_target_node_request_logout", 00:05:30.542 "iscsi_target_node_set_redirect", 00:05:30.542 "iscsi_target_node_set_auth", 00:05:30.542 "iscsi_target_node_add_lun", 00:05:30.542 "iscsi_get_stats", 00:05:30.542 "iscsi_get_connections", 00:05:30.542 "iscsi_portal_group_set_auth", 00:05:30.542 "iscsi_start_portal_group", 00:05:30.542 "iscsi_delete_portal_group", 00:05:30.542 "iscsi_create_portal_group", 00:05:30.542 "iscsi_get_portal_groups", 00:05:30.542 "iscsi_delete_target_node", 00:05:30.542 "iscsi_target_node_remove_pg_ig_maps", 00:05:30.542 "iscsi_target_node_add_pg_ig_maps", 00:05:30.542 "iscsi_create_target_node", 00:05:30.542 "iscsi_get_target_nodes", 00:05:30.542 "iscsi_delete_initiator_group", 00:05:30.542 "iscsi_initiator_group_remove_initiators", 00:05:30.542 "iscsi_initiator_group_add_initiators", 00:05:30.542 "iscsi_create_initiator_group", 00:05:30.542 "iscsi_get_initiator_groups", 00:05:30.542 "nvmf_set_crdt", 00:05:30.542 "nvmf_set_config", 00:05:30.542 "nvmf_set_max_subsystems", 00:05:30.542 "nvmf_subsystem_get_listeners", 00:05:30.542 "nvmf_subsystem_get_qpairs", 00:05:30.542 "nvmf_subsystem_get_controllers", 00:05:30.542 "nvmf_get_stats", 00:05:30.542 "nvmf_get_transports", 00:05:30.542 "nvmf_create_transport", 00:05:30.542 "nvmf_get_targets", 00:05:30.542 "nvmf_delete_target", 00:05:30.542 "nvmf_create_target", 00:05:30.542 "nvmf_subsystem_allow_any_host", 00:05:30.542 "nvmf_subsystem_remove_host", 00:05:30.542 "nvmf_subsystem_add_host", 00:05:30.542 "nvmf_ns_remove_host", 00:05:30.542 
"nvmf_ns_add_host", 00:05:30.542 "nvmf_subsystem_remove_ns", 00:05:30.542 "nvmf_subsystem_add_ns", 00:05:30.542 "nvmf_subsystem_listener_set_ana_state", 00:05:30.542 "nvmf_discovery_get_referrals", 00:05:30.542 "nvmf_discovery_remove_referral", 00:05:30.542 "nvmf_discovery_add_referral", 00:05:30.542 "nvmf_subsystem_remove_listener", 00:05:30.542 "nvmf_subsystem_add_listener", 00:05:30.542 "nvmf_delete_subsystem", 00:05:30.542 "nvmf_create_subsystem", 00:05:30.542 "nvmf_get_subsystems", 00:05:30.542 "env_dpdk_get_mem_stats", 00:05:30.542 "nbd_get_disks", 00:05:30.542 "nbd_stop_disk", 00:05:30.542 "nbd_start_disk", 00:05:30.542 "ublk_recover_disk", 00:05:30.542 "ublk_get_disks", 00:05:30.542 "ublk_stop_disk", 00:05:30.542 "ublk_start_disk", 00:05:30.542 "ublk_destroy_target", 00:05:30.542 "ublk_create_target", 00:05:30.542 "virtio_blk_create_transport", 00:05:30.542 "virtio_blk_get_transports", 00:05:30.542 "vhost_controller_set_coalescing", 00:05:30.542 "vhost_get_controllers", 00:05:30.542 "vhost_delete_controller", 00:05:30.542 "vhost_create_blk_controller", 00:05:30.542 "vhost_scsi_controller_remove_target", 00:05:30.542 "vhost_scsi_controller_add_target", 00:05:30.542 "vhost_start_scsi_controller", 00:05:30.542 "vhost_create_scsi_controller", 00:05:30.542 "thread_set_cpumask", 00:05:30.542 "framework_get_scheduler", 00:05:30.542 "framework_set_scheduler", 00:05:30.542 "framework_get_reactors", 00:05:30.542 "thread_get_io_channels", 00:05:30.542 "thread_get_pollers", 00:05:30.542 "thread_get_stats", 00:05:30.542 "framework_monitor_context_switch", 00:05:30.542 "spdk_kill_instance", 00:05:30.542 "log_enable_timestamps", 00:05:30.542 "log_get_flags", 00:05:30.542 "log_clear_flag", 00:05:30.542 "log_set_flag", 00:05:30.542 "log_get_level", 00:05:30.542 "log_set_level", 00:05:30.542 "log_get_print_level", 00:05:30.542 "log_set_print_level", 00:05:30.542 "framework_enable_cpumask_locks", 00:05:30.542 "framework_disable_cpumask_locks", 00:05:30.542 "framework_wait_init", 00:05:30.542 "framework_start_init", 00:05:30.542 "scsi_get_devices", 00:05:30.542 "bdev_get_histogram", 00:05:30.542 "bdev_enable_histogram", 00:05:30.542 "bdev_set_qos_limit", 00:05:30.542 "bdev_set_qd_sampling_period", 00:05:30.542 "bdev_get_bdevs", 00:05:30.542 "bdev_reset_iostat", 00:05:30.542 "bdev_get_iostat", 00:05:30.542 "bdev_examine", 00:05:30.542 "bdev_wait_for_examine", 00:05:30.542 "bdev_set_options", 00:05:30.542 "notify_get_notifications", 00:05:30.542 "notify_get_types", 00:05:30.542 "accel_get_stats", 00:05:30.542 "accel_set_options", 00:05:30.543 "accel_set_driver", 00:05:30.543 "accel_crypto_key_destroy", 00:05:30.543 "accel_crypto_keys_get", 00:05:30.543 "accel_crypto_key_create", 00:05:30.543 "accel_assign_opc", 00:05:30.543 "accel_get_module_info", 00:05:30.543 "accel_get_opc_assignments", 00:05:30.543 "vmd_rescan", 00:05:30.543 "vmd_remove_device", 00:05:30.543 "vmd_enable", 00:05:30.543 "sock_get_default_impl", 00:05:30.543 "sock_set_default_impl", 00:05:30.543 "sock_impl_set_options", 00:05:30.543 "sock_impl_get_options", 00:05:30.543 "iobuf_get_stats", 00:05:30.543 "iobuf_set_options", 00:05:30.543 "keyring_get_keys", 00:05:30.543 "framework_get_pci_devices", 00:05:30.543 "framework_get_config", 00:05:30.543 "framework_get_subsystems", 00:05:30.543 "vfu_tgt_set_base_path", 00:05:30.543 "trace_get_info", 00:05:30.543 "trace_get_tpoint_group_mask", 00:05:30.543 "trace_disable_tpoint_group", 00:05:30.543 "trace_enable_tpoint_group", 00:05:30.543 "trace_clear_tpoint_mask", 00:05:30.543 
"trace_set_tpoint_mask", 00:05:30.543 "spdk_get_version", 00:05:30.543 "rpc_get_methods" 00:05:30.543 ] 00:05:30.543 23:08:19 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:30.543 23:08:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:30.543 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:05:30.543 23:08:19 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:30.543 23:08:19 -- spdkcli/tcp.sh@38 -- # killprocess 3724106 00:05:30.543 23:08:19 -- common/autotest_common.sh@936 -- # '[' -z 3724106 ']' 00:05:30.543 23:08:19 -- common/autotest_common.sh@940 -- # kill -0 3724106 00:05:30.543 23:08:19 -- common/autotest_common.sh@941 -- # uname 00:05:30.543 23:08:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:30.543 23:08:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3724106 00:05:30.543 23:08:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:30.543 23:08:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:30.543 23:08:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3724106' 00:05:30.543 killing process with pid 3724106 00:05:30.543 23:08:19 -- common/autotest_common.sh@955 -- # kill 3724106 00:05:30.543 23:08:19 -- common/autotest_common.sh@960 -- # wait 3724106 00:05:30.803 00:05:30.803 real 0m1.377s 00:05:30.803 user 0m2.578s 00:05:30.803 sys 0m0.403s 00:05:30.803 23:08:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:30.803 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:05:30.803 ************************************ 00:05:30.803 END TEST spdkcli_tcp 00:05:30.803 ************************************ 00:05:30.803 23:08:19 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.803 23:08:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.803 23:08:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.803 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:05:30.803 ************************************ 00:05:30.803 START TEST dpdk_mem_utility 00:05:30.803 ************************************ 00:05:30.803 23:08:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.063 * Looking for test storage... 00:05:31.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:31.063 23:08:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.063 23:08:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.063 23:08:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3724514 00:05:31.063 23:08:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3724514 00:05:31.063 23:08:20 -- common/autotest_common.sh@817 -- # '[' -z 3724514 ']' 00:05:31.063 23:08:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.063 23:08:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.063 23:08:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:31.063 23:08:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.063 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:05:31.063 [2024-04-26 23:08:20.172811] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:31.063 [2024-04-26 23:08:20.172877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724514 ] 00:05:31.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.063 [2024-04-26 23:08:20.235354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.063 [2024-04-26 23:08:20.266017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.325 23:08:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.325 23:08:20 -- common/autotest_common.sh@850 -- # return 0 00:05:31.325 23:08:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:31.325 23:08:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:31.325 23:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.325 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:05:31.325 { 00:05:31.325 "filename": "/tmp/spdk_mem_dump.txt" 00:05:31.325 } 00:05:31.325 23:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.325 23:08:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.325 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:31.325 1 heaps totaling size 814.000000 MiB 00:05:31.325 size: 814.000000 MiB heap id: 0 00:05:31.325 end heaps---------- 00:05:31.325 8 mempools totaling size 598.116089 MiB 00:05:31.325 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:31.325 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:31.325 size: 84.521057 MiB name: bdev_io_3724514 00:05:31.325 size: 51.011292 MiB name: evtpool_3724514 00:05:31.325 size: 50.003479 MiB name: msgpool_3724514 00:05:31.325 size: 21.763794 MiB name: PDU_Pool 00:05:31.325 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:31.325 size: 0.026123 MiB name: Session_Pool 00:05:31.325 end mempools------- 00:05:31.325 6 memzones totaling size 4.142822 MiB 00:05:31.325 size: 1.000366 MiB name: RG_ring_0_3724514 00:05:31.325 size: 1.000366 MiB name: RG_ring_1_3724514 00:05:31.325 size: 1.000366 MiB name: RG_ring_4_3724514 00:05:31.325 size: 1.000366 MiB name: RG_ring_5_3724514 00:05:31.325 size: 0.125366 MiB name: RG_ring_2_3724514 00:05:31.325 size: 0.015991 MiB name: RG_ring_3_3724514 00:05:31.325 end memzones------- 00:05:31.325 23:08:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:31.325 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:31.325 list of free elements. 
size: 12.519348 MiB 00:05:31.325 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:31.325 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:31.325 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:31.325 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:31.325 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:31.325 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:31.325 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:31.325 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:31.325 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:31.325 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:31.325 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:31.325 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:31.325 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:31.325 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:31.325 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:31.325 list of standard malloc elements. size: 199.218079 MiB 00:05:31.325 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:31.325 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:31.325 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:31.325 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:31.325 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:31.325 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:31.325 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:31.325 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:31.325 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:31.325 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:31.325 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:31.325 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:31.325 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:31.325 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:31.325 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:31.325 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:31.325 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:31.325 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:31.325 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:31.325 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:31.325 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:31.325 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:31.325 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:31.325 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:31.325 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:31.325 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:31.325 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:31.326 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:31.326 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:31.326 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:31.326 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:31.326 list of memzone associated elements. size: 602.262573 MiB 00:05:31.326 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:31.326 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:31.326 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:31.326 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:31.326 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:31.326 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3724514_0 00:05:31.326 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:31.326 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3724514_0 00:05:31.326 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:31.326 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3724514_0 00:05:31.326 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:31.326 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:31.326 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:31.326 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:31.326 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:31.326 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3724514 00:05:31.326 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:31.326 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3724514 00:05:31.326 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:31.326 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3724514 00:05:31.326 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:31.326 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:31.326 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:31.326 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:31.326 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:31.326 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:31.326 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:31.326 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:31.326 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:31.326 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3724514 00:05:31.326 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:31.326 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3724514 00:05:31.326 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:31.326 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3724514 00:05:31.326 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:31.326 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3724514 00:05:31.326 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:31.326 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3724514 00:05:31.326 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:31.326 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:31.326 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:31.326 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:31.326 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:31.326 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:31.326 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:31.326 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3724514 00:05:31.326 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:31.326 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:31.326 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:31.326 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:31.326 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:31.326 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3724514 00:05:31.326 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:31.326 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:31.326 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:31.326 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3724514 00:05:31.326 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:31.326 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3724514 00:05:31.326 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:31.326 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:31.326 23:08:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:31.326 23:08:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3724514 00:05:31.326 23:08:20 -- common/autotest_common.sh@936 -- # '[' -z 3724514 ']' 00:05:31.326 23:08:20 -- common/autotest_common.sh@940 -- # kill -0 3724514 00:05:31.326 23:08:20 -- common/autotest_common.sh@941 -- # uname 00:05:31.326 23:08:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.326 23:08:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3724514 00:05:31.326 23:08:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.326 23:08:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.326 23:08:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3724514' 00:05:31.326 killing process with pid 3724514 00:05:31.326 23:08:20 -- common/autotest_common.sh@955 -- # kill 3724514 00:05:31.326 23:08:20 -- common/autotest_common.sh@960 -- # wait 3724514 00:05:31.587 00:05:31.587 real 0m0.735s 00:05:31.587 user 0m0.733s 00:05:31.587 sys 0m0.307s 00:05:31.587 23:08:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.587 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:05:31.587 ************************************ 00:05:31.587 END TEST dpdk_mem_utility 00:05:31.587 ************************************ 00:05:31.587 23:08:20 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:31.587 23:08:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.587 23:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.587 23:08:20 -- common/autotest_common.sh@10 -- # set +x 
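The dpdk_mem_utility pass above is a thin wrapper around one RPC plus a formatter: env_dpdk_get_mem_stats dumps the DPDK heap state to a file, and dpdk_mem_info.py renders the heap, mempool, and memzone summaries (and, with -m, the per-element detail) printed in the log. A sketch of the same flow against a running spdk_tgt:

    # Dump DPDK memory state; the RPC reports the dump file it wrote
    # (/tmp/spdk_mem_dump.txt in the run above).
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones from the dump:
    ./scripts/dpdk_mem_info.py
    # Per-element breakdown of heap 0, as shown above:
    ./scripts/dpdk_mem_info.py -m 0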
00:05:31.909 ************************************ 00:05:31.909 START TEST event 00:05:31.909 ************************************ 00:05:31.909 23:08:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:31.909 * Looking for test storage... 00:05:31.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:31.909 23:08:21 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:31.909 23:08:21 -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.909 23:08:21 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.909 23:08:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:31.910 23:08:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.910 23:08:21 -- common/autotest_common.sh@10 -- # set +x 00:05:32.171 ************************************ 00:05:32.171 START TEST event_perf 00:05:32.171 ************************************ 00:05:32.171 23:08:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:32.171 Running I/O for 1 seconds...[2024-04-26 23:08:21.222830] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:32.171 [2024-04-26 23:08:21.222922] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724909 ] 00:05:32.171 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.171 [2024-04-26 23:08:21.291931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.171 [2024-04-26 23:08:21.331231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.171 [2024-04-26 23:08:21.331374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.171 [2024-04-26 23:08:21.331554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.171 Running I/O for 1 seconds...[2024-04-26 23:08:21.331554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.111 00:05:33.111 lcore 0: 173010 00:05:33.111 lcore 1: 173011 00:05:33.111 lcore 2: 173009 00:05:33.111 lcore 3: 173012 00:05:33.372 done. 
00:05:33.372 00:05:33.372 real 0m1.169s 00:05:33.372 user 0m4.083s 00:05:33.372 sys 0m0.085s 00:05:33.372 23:08:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.372 23:08:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.372 ************************************ 00:05:33.372 END TEST event_perf 00:05:33.372 ************************************ 00:05:33.372 23:08:22 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:33.372 23:08:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:33.372 23:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.372 23:08:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.372 ************************************ 00:05:33.372 START TEST event_reactor 00:05:33.372 ************************************ 00:05:33.372 23:08:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:33.372 [2024-04-26 23:08:22.579669] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:33.372 [2024-04-26 23:08:22.579762] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725118 ] 00:05:33.372 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.632 [2024-04-26 23:08:22.648067] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.632 [2024-04-26 23:08:22.684926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.576 test_start 00:05:34.576 oneshot 00:05:34.576 tick 100 00:05:34.576 tick 100 00:05:34.576 tick 250 00:05:34.576 tick 100 00:05:34.576 tick 100 00:05:34.576 tick 100 00:05:34.576 tick 250 00:05:34.576 tick 500 00:05:34.576 tick 100 00:05:34.576 tick 100 00:05:34.576 tick 250 00:05:34.576 tick 100 00:05:34.576 tick 100 00:05:34.576 test_end 00:05:34.576 00:05:34.576 real 0m1.164s 00:05:34.576 user 0m1.079s 00:05:34.576 sys 0m0.081s 00:05:34.576 23:08:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.576 23:08:23 -- common/autotest_common.sh@10 -- # set +x 00:05:34.576 ************************************ 00:05:34.576 END TEST event_reactor 00:05:34.576 ************************************ 00:05:34.576 23:08:23 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.576 23:08:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:34.576 23:08:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.576 23:08:23 -- common/autotest_common.sh@10 -- # set +x 00:05:34.845 ************************************ 00:05:34.845 START TEST event_reactor_perf 00:05:34.845 ************************************ 00:05:34.845 23:08:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.845 [2024-04-26 23:08:23.908263] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
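The two runs above follow the same pattern: a small EAL app pinned to a core mask for a fixed duration, with per-lcore event counters (event_perf) or poller tick traces (event_reactor) as the pass signal. The invocations, taken from the traces above and written here relative to an SPDK build tree:

    # Fan events across four reactors for one second:
    ./test/event/event_perf/event_perf -m 0xF -t 1
    # Single-reactor oneshot/timer poller exercise for one second:
    ./test/event/reactor/reactor -t 1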
00:05:34.845 [2024-04-26 23:08:23.908351] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725320 ] 00:05:34.845 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.845 [2024-04-26 23:08:23.972312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.845 [2024-04-26 23:08:24.002979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.788 test_start 00:05:35.788 test_end 00:05:35.788 Performance: 367243 events per second 00:05:35.788 00:05:35.788 real 0m1.153s 00:05:35.788 user 0m1.085s 00:05:35.788 sys 0m0.065s 00:05:35.788 23:08:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.788 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.788 ************************************ 00:05:35.788 END TEST event_reactor_perf 00:05:35.788 ************************************ 00:05:36.049 23:08:25 -- event/event.sh@49 -- # uname -s 00:05:36.049 23:08:25 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:36.049 23:08:25 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.049 23:08:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.049 23:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.049 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.049 ************************************ 00:05:36.049 START TEST event_scheduler 00:05:36.049 ************************************ 00:05:36.049 23:08:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:36.310 * Looking for test storage... 00:05:36.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:36.310 23:08:25 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:36.310 23:08:25 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:36.310 23:08:25 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3725708 00:05:36.310 23:08:25 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.310 23:08:25 -- scheduler/scheduler.sh@37 -- # waitforlisten 3725708 00:05:36.310 23:08:25 -- common/autotest_common.sh@817 -- # '[' -z 3725708 ']' 00:05:36.310 23:08:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.310 23:08:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:36.310 23:08:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.310 23:08:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:36.310 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.310 [2024-04-26 23:08:25.347500] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:36.310 [2024-04-26 23:08:25.347553] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725708 ] 00:05:36.310 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.310 [2024-04-26 23:08:25.397176] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.310 [2024-04-26 23:08:25.434931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.310 [2024-04-26 23:08:25.435090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.310 [2024-04-26 23:08:25.435091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.310 [2024-04-26 23:08:25.434961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.310 23:08:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:36.310 23:08:25 -- common/autotest_common.sh@850 -- # return 0 00:05:36.310 23:08:25 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:36.310 23:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.310 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.310 POWER: Env isn't set yet! 00:05:36.310 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:36.310 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:36.310 POWER: Cannot set governor of lcore 0 to userspace 00:05:36.310 POWER: Attempting to initialise PSTAT power management... 00:05:36.310 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:36.310 POWER: Initialized successfully for lcore 0 power management 00:05:36.310 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:36.310 POWER: Initialized successfully for lcore 1 power management 00:05:36.310 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:36.310 POWER: Initialized successfully for lcore 2 power management 00:05:36.310 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:36.310 POWER: Initialized successfully for lcore 3 power management 00:05:36.310 23:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.310 23:08:25 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:36.310 23:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.310 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.572 [2024-04-26 23:08:25.585439] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
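The POWER messages above come from scheduler.sh switching the app to the dynamic scheduler before init: each lcore's cpufreq governor is flipped to 'performance' for the test and restored afterwards. The RPC sequence, as traced above (the app was launched with --wait-for-rpc, so the scheduler can be selected before initialization):

    # Select the dynamic scheduler, then let initialization proceed:
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # Confirm which scheduler is active:
    ./scripts/rpc.py framework_get_scheduler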
00:05:36.572 23:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.572 23:08:25 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:36.572 23:08:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.572 23:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.572 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.572 ************************************ 00:05:36.572 START TEST scheduler_create_thread 00:05:36.572 ************************************ 00:05:36.572 23:08:25 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:36.572 23:08:25 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:36.572 23:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.572 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.572 2 00:05:36.572 23:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.572 23:08:25 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:36.572 23:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.572 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.572 3 00:05:36.572 23:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.572 23:08:25 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:36.572 23:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.572 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.572 4 00:05:36.572 23:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.572 23:08:25 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:36.572 23:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.572 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.572 5 00:05:36.572 23:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.572 23:08:25 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:36.572 23:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.572 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.572 6 00:05:36.572 23:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.572 23:08:25 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:36.572 23:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.572 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.572 7 00:05:36.572 23:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.572 23:08:25 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:36.572 23:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.572 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:36.833 8 00:05:36.833 23:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.833 23:08:25 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:36.833 23:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.833 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:37.436 9 00:05:37.436 
23:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:37.436 23:08:26 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:37.436 23:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:37.436 23:08:26 -- common/autotest_common.sh@10 -- # set +x 00:05:38.821 10 00:05:38.821 23:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:38.821 23:08:27 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:38.821 23:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:38.821 23:08:27 -- common/autotest_common.sh@10 -- # set +x 00:05:40.209 23:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:40.209 23:08:29 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:40.209 23:08:29 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:40.209 23:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:40.209 23:08:29 -- common/autotest_common.sh@10 -- # set +x 00:05:40.780 23:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:40.780 23:08:29 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:40.780 23:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:40.780 23:08:29 -- common/autotest_common.sh@10 -- # set +x 00:05:41.724 23:08:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:41.724 23:08:30 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:41.724 23:08:30 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:41.724 23:08:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:41.724 23:08:30 -- common/autotest_common.sh@10 -- # set +x 00:05:42.296 23:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.296 00:05:42.296 real 0m5.598s 00:05:42.296 user 0m0.027s 00:05:42.296 sys 0m0.003s 00:05:42.296 23:08:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.296 23:08:31 -- common/autotest_common.sh@10 -- # set +x 00:05:42.296 ************************************ 00:05:42.296 END TEST scheduler_create_thread 00:05:42.296 ************************************ 00:05:42.296 23:08:31 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:42.296 23:08:31 -- scheduler/scheduler.sh@46 -- # killprocess 3725708 00:05:42.296 23:08:31 -- common/autotest_common.sh@936 -- # '[' -z 3725708 ']' 00:05:42.296 23:08:31 -- common/autotest_common.sh@940 -- # kill -0 3725708 00:05:42.296 23:08:31 -- common/autotest_common.sh@941 -- # uname 00:05:42.296 23:08:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.296 23:08:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3725708 00:05:42.296 23:08:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:42.296 23:08:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:42.296 23:08:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3725708' 00:05:42.296 killing process with pid 3725708 00:05:42.296 23:08:31 -- common/autotest_common.sh@955 -- # kill 3725708 00:05:42.296 23:08:31 -- common/autotest_common.sh@960 -- # wait 3725708 00:05:42.557 [2024-04-26 23:08:31.610126] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
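scheduler_create_thread above drives everything through an rpc.py plugin: each scheduler_thread_create call registers a test thread with a cpumask and an active percentage, and the returned thread ids (11 and 12 above) feed the set-active and delete steps. A sketch of the calls, assuming the test's plugin directory is on PYTHONPATH as scheduler.sh arranges:

    # Create a pinned thread that is busy 100% of the time on core 0:
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # Drop a thread (id returned by the create call) to 50% active, then delete another:
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12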
00:05:42.557 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:42.557 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:42.557 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:42.557 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:42.557 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:42.557 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:42.557 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:42.557 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:42.557 00:05:42.557 real 0m6.541s 00:05:42.557 user 0m12.552s 00:05:42.557 sys 0m0.391s 00:05:42.557 23:08:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.557 23:08:31 -- common/autotest_common.sh@10 -- # set +x 00:05:42.557 ************************************ 00:05:42.557 END TEST event_scheduler 00:05:42.557 ************************************ 00:05:42.557 23:08:31 -- event/event.sh@51 -- # modprobe -n nbd 00:05:42.557 23:08:31 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:42.557 23:08:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.557 23:08:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.557 23:08:31 -- common/autotest_common.sh@10 -- # set +x 00:05:42.819 ************************************ 00:05:42.819 START TEST app_repeat 00:05:42.819 ************************************ 00:05:42.819 23:08:31 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:42.819 23:08:31 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.819 23:08:31 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.819 23:08:31 -- event/event.sh@13 -- # local nbd_list 00:05:42.819 23:08:31 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.819 23:08:31 -- event/event.sh@14 -- # local bdev_list 00:05:42.819 23:08:31 -- event/event.sh@15 -- # local repeat_times=4 00:05:42.819 23:08:31 -- event/event.sh@17 -- # modprobe nbd 00:05:42.819 23:08:31 -- event/event.sh@19 -- # repeat_pid=3727115 00:05:42.819 23:08:31 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.819 23:08:31 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3727115' 00:05:42.819 Process app_repeat pid: 3727115 00:05:42.819 23:08:31 -- event/event.sh@23 -- # for i in {0..2} 00:05:42.819 23:08:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:42.819 spdk_app_start Round 0 00:05:42.819 23:08:31 -- event/event.sh@25 -- # waitforlisten 3727115 /var/tmp/spdk-nbd.sock 00:05:42.819 23:08:31 -- common/autotest_common.sh@817 -- # '[' -z 3727115 ']' 00:05:42.819 23:08:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.819 23:08:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:42.819 23:08:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
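app_repeat, starting above, repeats one cycle per round: create two Malloc bdevs over the test app's dedicated RPC socket, export them as /dev/nbd0 and /dev/nbd1, verify them, then tear everything down and restart. The setup half, condensed from the rounds traced below:

    SOCK=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s $SOCK bdev_malloc_create 64 4096       # -> Malloc0
    ./scripts/rpc.py -s $SOCK bdev_malloc_create 64 4096       # -> Malloc1
    ./scripts/rpc.py -s $SOCK nbd_start_disk Malloc0 /dev/nbd0
    ./scripts/rpc.py -s $SOCK nbd_start_disk Malloc1 /dev/nbd1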
00:05:42.819 23:08:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:42.819 23:08:31 -- common/autotest_common.sh@10 -- # set +x 00:05:42.819 23:08:31 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:42.819 [2024-04-26 23:08:31.985251] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:42.819 [2024-04-26 23:08:31.985319] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727115 ] 00:05:42.819 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.819 [2024-04-26 23:08:32.051273] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.081 [2024-04-26 23:08:32.089542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.081 [2024-04-26 23:08:32.089548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.081 23:08:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:43.081 23:08:32 -- common/autotest_common.sh@850 -- # return 0 00:05:43.081 23:08:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.081 Malloc0 00:05:43.081 23:08:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.343 Malloc1 00:05:43.343 23:08:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@12 -- # local i 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.343 23:08:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.604 /dev/nbd0 00:05:43.604 23:08:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.604 23:08:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.604 23:08:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:43.604 23:08:32 -- common/autotest_common.sh@855 -- # local i 00:05:43.604 23:08:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:43.604 23:08:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:43.604 23:08:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 
/proc/partitions 00:05:43.604 23:08:32 -- common/autotest_common.sh@859 -- # break 00:05:43.604 23:08:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:43.604 23:08:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:43.604 23:08:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.604 1+0 records in 00:05:43.604 1+0 records out 00:05:43.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253866 s, 16.1 MB/s 00:05:43.604 23:08:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.604 23:08:32 -- common/autotest_common.sh@872 -- # size=4096 00:05:43.604 23:08:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.604 23:08:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:43.604 23:08:32 -- common/autotest_common.sh@875 -- # return 0 00:05:43.604 23:08:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.604 23:08:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.604 23:08:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.604 /dev/nbd1 00:05:43.604 23:08:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.604 23:08:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.604 23:08:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:43.604 23:08:32 -- common/autotest_common.sh@855 -- # local i 00:05:43.604 23:08:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:43.604 23:08:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:43.604 23:08:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:43.604 23:08:32 -- common/autotest_common.sh@859 -- # break 00:05:43.604 23:08:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:43.604 23:08:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:43.604 23:08:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.604 1+0 records in 00:05:43.604 1+0 records out 00:05:43.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237512 s, 17.2 MB/s 00:05:43.604 23:08:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.604 23:08:32 -- common/autotest_common.sh@872 -- # size=4096 00:05:43.604 23:08:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.867 23:08:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:43.867 23:08:32 -- common/autotest_common.sh@875 -- # return 0 00:05:43.867 23:08:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.867 23:08:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.867 23:08:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.867 23:08:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.867 23:08:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.867 { 00:05:43.867 "nbd_device": "/dev/nbd0", 00:05:43.867 "bdev_name": "Malloc0" 00:05:43.867 }, 00:05:43.867 { 
00:05:43.867 "nbd_device": "/dev/nbd1", 00:05:43.867 "bdev_name": "Malloc1" 00:05:43.867 } 00:05:43.867 ]' 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.867 { 00:05:43.867 "nbd_device": "/dev/nbd0", 00:05:43.867 "bdev_name": "Malloc0" 00:05:43.867 }, 00:05:43.867 { 00:05:43.867 "nbd_device": "/dev/nbd1", 00:05:43.867 "bdev_name": "Malloc1" 00:05:43.867 } 00:05:43.867 ]' 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.867 /dev/nbd1' 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.867 /dev/nbd1' 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.867 23:08:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.867 256+0 records in 00:05:43.867 256+0 records out 00:05:43.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124976 s, 83.9 MB/s 00:05:43.868 23:08:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.868 23:08:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.868 256+0 records in 00:05:43.868 256+0 records out 00:05:43.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153119 s, 68.5 MB/s 00:05:43.868 23:08:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.868 23:08:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.129 256+0 records in 00:05:44.129 256+0 records out 00:05:44.129 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173061 s, 60.6 MB/s 00:05:44.129 23:08:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.129 23:08:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.129 23:08:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.129 23:08:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.129 23:08:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.129 23:08:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.129 23:08:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.129 23:08:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.129 23:08:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.129 23:08:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.129 23:08:33 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@51 -- # local i 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@41 -- # break 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.130 23:08:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@41 -- # break 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.391 23:08:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@65 -- # true 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.652 23:08:33 -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.652 23:08:33 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 
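Round 0 above closes with the data-verify idiom nbd_common.sh uses throughout: write a file of random data through each nbd device with O_DIRECT, read it back with cmp, then unhook the disks and tell the target to exit. Condensed from the trace (paths shortened):

    # Write 1 MiB of random data through the exported bdev, then verify it:
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0
    rm nbdrandtest
    # Tear down and stop the target:
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM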
00:05:44.652 23:08:33 -- event/event.sh@35 -- # sleep 3 00:05:44.913 [2024-04-26 23:08:33.991519] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.913 [2024-04-26 23:08:34.018976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.913 [2024-04-26 23:08:34.018981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.914 [2024-04-26 23:08:34.050938] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.914 [2024-04-26 23:08:34.050973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.222 23:08:36 -- event/event.sh@23 -- # for i in {0..2} 00:05:48.222 23:08:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:48.222 spdk_app_start Round 1 00:05:48.223 23:08:36 -- event/event.sh@25 -- # waitforlisten 3727115 /var/tmp/spdk-nbd.sock 00:05:48.223 23:08:36 -- common/autotest_common.sh@817 -- # '[' -z 3727115 ']' 00:05:48.223 23:08:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.223 23:08:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:48.223 23:08:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.223 23:08:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:48.223 23:08:36 -- common/autotest_common.sh@10 -- # set +x 00:05:48.223 23:08:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.223 23:08:37 -- common/autotest_common.sh@850 -- # return 0 00:05:48.223 23:08:37 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.223 Malloc0 00:05:48.223 23:08:37 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.223 Malloc1 00:05:48.223 23:08:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@12 -- # local i 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.223 23:08:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.483 
/dev/nbd0 00:05:48.483 23:08:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.483 23:08:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.483 23:08:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:48.483 23:08:37 -- common/autotest_common.sh@855 -- # local i 00:05:48.483 23:08:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:48.483 23:08:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:48.483 23:08:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:48.483 23:08:37 -- common/autotest_common.sh@859 -- # break 00:05:48.483 23:08:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:48.483 23:08:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:48.483 23:08:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.483 1+0 records in 00:05:48.483 1+0 records out 00:05:48.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245073 s, 16.7 MB/s 00:05:48.483 23:08:37 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.483 23:08:37 -- common/autotest_common.sh@872 -- # size=4096 00:05:48.483 23:08:37 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.483 23:08:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:48.483 23:08:37 -- common/autotest_common.sh@875 -- # return 0 00:05:48.483 23:08:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.483 23:08:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.483 23:08:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.483 /dev/nbd1 00:05:48.483 23:08:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.745 23:08:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.745 23:08:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:48.745 23:08:37 -- common/autotest_common.sh@855 -- # local i 00:05:48.745 23:08:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:48.745 23:08:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:48.745 23:08:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:48.745 23:08:37 -- common/autotest_common.sh@859 -- # break 00:05:48.745 23:08:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:48.745 23:08:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:48.745 23:08:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.745 1+0 records in 00:05:48.745 1+0 records out 00:05:48.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218121 s, 18.8 MB/s 00:05:48.745 23:08:37 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.745 23:08:37 -- common/autotest_common.sh@872 -- # size=4096 00:05:48.745 23:08:37 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.745 23:08:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:48.745 23:08:37 -- common/autotest_common.sh@875 -- # return 0 00:05:48.745 23:08:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.745 23:08:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:05:48.745 23:08:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.745 23:08:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.745 23:08:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.745 23:08:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.745 { 00:05:48.745 "nbd_device": "/dev/nbd0", 00:05:48.745 "bdev_name": "Malloc0" 00:05:48.745 }, 00:05:48.745 { 00:05:48.745 "nbd_device": "/dev/nbd1", 00:05:48.745 "bdev_name": "Malloc1" 00:05:48.745 } 00:05:48.745 ]' 00:05:48.745 23:08:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.745 { 00:05:48.745 "nbd_device": "/dev/nbd0", 00:05:48.745 "bdev_name": "Malloc0" 00:05:48.745 }, 00:05:48.745 { 00:05:48.745 "nbd_device": "/dev/nbd1", 00:05:48.745 "bdev_name": "Malloc1" 00:05:48.745 } 00:05:48.745 ]' 00:05:48.745 23:08:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.746 /dev/nbd1' 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.746 /dev/nbd1' 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.746 256+0 records in 00:05:48.746 256+0 records out 00:05:48.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118319 s, 88.6 MB/s 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.746 23:08:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.008 256+0 records in 00:05:49.008 256+0 records out 00:05:49.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160497 s, 65.3 MB/s 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.008 256+0 records in 00:05:49.008 256+0 records out 00:05:49.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167453 s, 62.6 MB/s 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@51 -- # local i 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@41 -- # break 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.008 23:08:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@41 -- # break 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.271 23:08:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@65 -- # true 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.533 23:08:38 -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.533 23:08:38 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.533 23:08:38 -- event/event.sh@35 -- # sleep 3 00:05:49.794 [2024-04-26 23:08:38.897530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.794 [2024-04-26 23:08:38.925719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.794 [2024-04-26 23:08:38.925724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.794 [2024-04-26 23:08:38.958397] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.794 [2024-04-26 23:08:38.958435] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.161 23:08:41 -- event/event.sh@23 -- # for i in {0..2} 00:05:53.161 23:08:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:53.161 spdk_app_start Round 2 00:05:53.161 23:08:41 -- event/event.sh@25 -- # waitforlisten 3727115 /var/tmp/spdk-nbd.sock 00:05:53.161 23:08:41 -- common/autotest_common.sh@817 -- # '[' -z 3727115 ']' 00:05:53.161 23:08:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.161 23:08:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.161 23:08:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
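Between start and stop, the Round 1 trace above runs nbd_rpc_data_verify: count the exported devices over the RPC socket, write 1 MiB of random data through each one with direct I/O, then read it back with cmp. A condensed sketch of that cycle, assuming rpc.py is on PATH (the trace uses the full repo path):

    rpc="rpc.py -s /var/tmp/spdk-nbd.sock"
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 2 ] || exit 1                         # both Malloc bdevs must be exported
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256  # 1 MiB test pattern
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest "$dev"                  # byte-for-byte readback check
    done
    rm nbdrandtest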
00:05:53.161 23:08:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.161 23:08:41 -- common/autotest_common.sh@10 -- # set +x 00:05:53.161 23:08:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.161 23:08:41 -- common/autotest_common.sh@850 -- # return 0 00:05:53.162 23:08:41 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.162 Malloc0 00:05:53.162 23:08:42 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.162 Malloc1 00:05:53.162 23:08:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@12 -- # local i 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.162 23:08:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.162 /dev/nbd0 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.424 23:08:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:53.424 23:08:42 -- common/autotest_common.sh@855 -- # local i 00:05:53.424 23:08:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:53.424 23:08:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:53.424 23:08:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:53.424 23:08:42 -- common/autotest_common.sh@859 -- # break 00:05:53.424 23:08:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:53.424 23:08:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:53.424 23:08:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.424 1+0 records in 00:05:53.424 1+0 records out 00:05:53.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272015 s, 15.1 MB/s 00:05:53.424 23:08:42 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.424 23:08:42 -- common/autotest_common.sh@872 -- # size=4096 00:05:53.424 23:08:42 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.424 23:08:42 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:53.424 23:08:42 -- common/autotest_common.sh@875 -- # return 0 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.424 /dev/nbd1 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.424 23:08:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:53.424 23:08:42 -- common/autotest_common.sh@855 -- # local i 00:05:53.424 23:08:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:53.424 23:08:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:53.424 23:08:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:53.424 23:08:42 -- common/autotest_common.sh@859 -- # break 00:05:53.424 23:08:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:53.424 23:08:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:53.424 23:08:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.424 1+0 records in 00:05:53.424 1+0 records out 00:05:53.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223694 s, 18.3 MB/s 00:05:53.424 23:08:42 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.424 23:08:42 -- common/autotest_common.sh@872 -- # size=4096 00:05:53.424 23:08:42 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.424 23:08:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:53.424 23:08:42 -- common/autotest_common.sh@875 -- # return 0 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.424 23:08:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.687 { 00:05:53.687 "nbd_device": "/dev/nbd0", 00:05:53.687 "bdev_name": "Malloc0" 00:05:53.687 }, 00:05:53.687 { 00:05:53.687 "nbd_device": "/dev/nbd1", 00:05:53.687 "bdev_name": "Malloc1" 00:05:53.687 } 00:05:53.687 ]' 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.687 { 00:05:53.687 "nbd_device": "/dev/nbd0", 00:05:53.687 "bdev_name": "Malloc0" 00:05:53.687 }, 00:05:53.687 { 00:05:53.687 "nbd_device": "/dev/nbd1", 00:05:53.687 "bdev_name": "Malloc1" 00:05:53.687 } 00:05:53.687 ]' 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.687 /dev/nbd1' 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.687 /dev/nbd1' 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.687 23:08:42 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.687 256+0 records in 00:05:53.687 256+0 records out 00:05:53.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012509 s, 83.8 MB/s 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.687 256+0 records in 00:05:53.687 256+0 records out 00:05:53.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155825 s, 67.3 MB/s 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.687 256+0 records in 00:05:53.687 256+0 records out 00:05:53.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159236 s, 65.9 MB/s 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@51 -- # local i 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.687 23:08:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.948 23:08:43 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.948 23:08:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.948 23:08:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.948 23:08:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.948 23:08:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.948 23:08:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.948 23:08:43 -- bdev/nbd_common.sh@41 -- # break 00:05:53.948 23:08:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.948 23:08:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.948 23:08:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@41 -- # break 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.209 23:08:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.470 23:08:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.471 23:08:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.471 23:08:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.471 23:08:43 -- bdev/nbd_common.sh@65 -- # true 00:05:54.471 23:08:43 -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.471 23:08:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.471 23:08:43 -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.471 23:08:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.471 23:08:43 -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.471 23:08:43 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.471 23:08:43 -- event/event.sh@35 -- # sleep 3 00:05:54.732 [2024-04-26 23:08:43.776934] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.732 [2024-04-26 23:08:43.804655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.732 [2024-04-26 23:08:43.804661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.732 [2024-04-26 23:08:43.836640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.732 [2024-04-26 23:08:43.836676] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
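Round 2 ends above exactly as Rounds 0 and 1 did, which is the point of app_repeat: the same start/verify/SIGTERM cycle runs three times to show the app survives reinitialization. The driving loop from event.sh, reduced to its skeleton (pid handling and the Malloc bdev creation elided; the sleep 3 matches the event.sh@35 trace):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$pid" /var/tmp/spdk-nbd.sock     # app is up once the RPC socket answers
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                         # give the app time to restart itself
    done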
00:05:58.036 23:08:46 -- event/event.sh@38 -- # waitforlisten 3727115 /var/tmp/spdk-nbd.sock 00:05:58.036 23:08:46 -- common/autotest_common.sh@817 -- # '[' -z 3727115 ']' 00:05:58.036 23:08:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.036 23:08:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.036 23:08:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.036 23:08:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.036 23:08:46 -- common/autotest_common.sh@10 -- # set +x 00:05:58.036 23:08:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.036 23:08:46 -- common/autotest_common.sh@850 -- # return 0 00:05:58.036 23:08:46 -- event/event.sh@39 -- # killprocess 3727115 00:05:58.036 23:08:46 -- common/autotest_common.sh@936 -- # '[' -z 3727115 ']' 00:05:58.036 23:08:46 -- common/autotest_common.sh@940 -- # kill -0 3727115 00:05:58.036 23:08:46 -- common/autotest_common.sh@941 -- # uname 00:05:58.036 23:08:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.036 23:08:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3727115 00:05:58.036 23:08:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.036 23:08:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.036 23:08:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3727115' 00:05:58.036 killing process with pid 3727115 00:05:58.036 23:08:46 -- common/autotest_common.sh@955 -- # kill 3727115 00:05:58.036 23:08:46 -- common/autotest_common.sh@960 -- # wait 3727115 00:05:58.036 spdk_app_start is called in Round 0. 00:05:58.036 Shutdown signal received, stop current app iteration 00:05:58.036 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:05:58.036 spdk_app_start is called in Round 1. 00:05:58.036 Shutdown signal received, stop current app iteration 00:05:58.037 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:05:58.037 spdk_app_start is called in Round 2. 00:05:58.037 Shutdown signal received, stop current app iteration 00:05:58.037 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:05:58.037 spdk_app_start is called in Round 3. 
00:05:58.037 Shutdown signal received, stop current app iteration 00:05:58.037 23:08:46 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:58.037 23:08:46 -- event/event.sh@42 -- # return 0 00:05:58.037 00:05:58.037 real 0m15.019s 00:05:58.037 user 0m32.485s 00:05:58.037 sys 0m2.149s 00:05:58.037 23:08:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.037 23:08:46 -- common/autotest_common.sh@10 -- # set +x 00:05:58.037 ************************************ 00:05:58.037 END TEST app_repeat 00:05:58.037 ************************************ 00:05:58.037 23:08:47 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:58.037 23:08:47 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:58.037 23:08:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.037 23:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.037 23:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:58.037 ************************************ 00:05:58.037 START TEST cpu_locks 00:05:58.037 ************************************ 00:05:58.037 23:08:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:58.037 * Looking for test storage... 00:05:58.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:58.037 23:08:47 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:58.037 23:08:47 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:58.037 23:08:47 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:58.037 23:08:47 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:58.037 23:08:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.037 23:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.037 23:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:58.303 ************************************ 00:05:58.303 START TEST default_locks 00:05:58.304 ************************************ 00:05:58.304 23:08:47 -- common/autotest_common.sh@1111 -- # default_locks 00:05:58.304 23:08:47 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3730377 00:05:58.304 23:08:47 -- event/cpu_locks.sh@47 -- # waitforlisten 3730377 00:05:58.304 23:08:47 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.304 23:08:47 -- common/autotest_common.sh@817 -- # '[' -z 3730377 ']' 00:05:58.304 23:08:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.304 23:08:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.304 23:08:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.304 23:08:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.304 23:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:58.304 [2024-04-26 23:08:47.442224] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:58.304 [2024-04-26 23:08:47.442271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730377 ] 00:05:58.304 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.304 [2024-04-26 23:08:47.503798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.304 [2024-04-26 23:08:47.532359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.250 23:08:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.250 23:08:48 -- common/autotest_common.sh@850 -- # return 0 00:05:59.250 23:08:48 -- event/cpu_locks.sh@49 -- # locks_exist 3730377 00:05:59.250 23:08:48 -- event/cpu_locks.sh@22 -- # lslocks -p 3730377 00:05:59.250 23:08:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.250 lslocks: write error 00:05:59.250 23:08:48 -- event/cpu_locks.sh@50 -- # killprocess 3730377 00:05:59.250 23:08:48 -- common/autotest_common.sh@936 -- # '[' -z 3730377 ']' 00:05:59.250 23:08:48 -- common/autotest_common.sh@940 -- # kill -0 3730377 00:05:59.250 23:08:48 -- common/autotest_common.sh@941 -- # uname 00:05:59.250 23:08:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.250 23:08:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3730377 00:05:59.250 23:08:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.250 23:08:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.250 23:08:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3730377' 00:05:59.250 killing process with pid 3730377 00:05:59.250 23:08:48 -- common/autotest_common.sh@955 -- # kill 3730377 00:05:59.250 23:08:48 -- common/autotest_common.sh@960 -- # wait 3730377 00:05:59.511 23:08:48 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3730377 00:05:59.511 23:08:48 -- common/autotest_common.sh@638 -- # local es=0 00:05:59.511 23:08:48 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3730377 00:05:59.511 23:08:48 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:59.511 23:08:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:59.511 23:08:48 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:59.511 23:08:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:59.511 23:08:48 -- common/autotest_common.sh@641 -- # waitforlisten 3730377 00:05:59.511 23:08:48 -- common/autotest_common.sh@817 -- # '[' -z 3730377 ']' 00:05:59.511 23:08:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.511 23:08:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.511 23:08:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
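The locks_exist check traced above boils down to a single pipeline, and the "lslocks: write error" line is most likely lslocks complaining that grep -q closed the pipe after the first match, not a test failure. A sketch, with the pid taken from the trace:

    locks_exist() {
        # the target holds an advisory lock named spdk_cpu_lock_* while it runs
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 3730377 && echo "core lock held"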
00:05:59.511 23:08:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.511 23:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:59.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3730377) - No such process 00:05:59.511 ERROR: process (pid: 3730377) is no longer running 00:05:59.511 23:08:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.511 23:08:48 -- common/autotest_common.sh@850 -- # return 1 00:05:59.511 23:08:48 -- common/autotest_common.sh@641 -- # es=1 00:05:59.511 23:08:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:59.511 23:08:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:59.511 23:08:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:59.511 23:08:48 -- event/cpu_locks.sh@54 -- # no_locks 00:05:59.511 23:08:48 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.511 23:08:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.511 23:08:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.511 00:05:59.511 real 0m1.214s 00:05:59.511 user 0m1.288s 00:05:59.511 sys 0m0.394s 00:05:59.511 23:08:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.511 23:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:59.511 ************************************ 00:05:59.511 END TEST default_locks 00:05:59.511 ************************************ 00:05:59.511 23:08:48 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:59.511 23:08:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.511 23:08:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.511 23:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:59.773 ************************************ 00:05:59.773 START TEST default_locks_via_rpc 00:05:59.773 ************************************ 00:05:59.773 23:08:48 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:59.773 23:08:48 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3730744 00:05:59.773 23:08:48 -- event/cpu_locks.sh@63 -- # waitforlisten 3730744 00:05:59.773 23:08:48 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.773 23:08:48 -- common/autotest_common.sh@817 -- # '[' -z 3730744 ']' 00:05:59.773 23:08:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.773 23:08:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.773 23:08:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.773 23:08:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.773 23:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:59.773 [2024-04-26 23:08:48.852728] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:59.773 [2024-04-26 23:08:48.852775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730744 ] 00:05:59.773 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.773 [2024-04-26 23:08:48.913808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.773 [2024-04-26 23:08:48.941817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.718 23:08:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.718 23:08:49 -- common/autotest_common.sh@850 -- # return 0 00:06:00.718 23:08:49 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:00.718 23:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.718 23:08:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.718 23:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.718 23:08:49 -- event/cpu_locks.sh@67 -- # no_locks 00:06:00.718 23:08:49 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.718 23:08:49 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.718 23:08:49 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.718 23:08:49 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.718 23:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.718 23:08:49 -- common/autotest_common.sh@10 -- # set +x 00:06:00.718 23:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.718 23:08:49 -- event/cpu_locks.sh@71 -- # locks_exist 3730744 00:06:00.718 23:08:49 -- event/cpu_locks.sh@22 -- # lslocks -p 3730744 00:06:00.718 23:08:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.978 23:08:50 -- event/cpu_locks.sh@73 -- # killprocess 3730744 00:06:00.978 23:08:50 -- common/autotest_common.sh@936 -- # '[' -z 3730744 ']' 00:06:00.978 23:08:50 -- common/autotest_common.sh@940 -- # kill -0 3730744 00:06:00.978 23:08:50 -- common/autotest_common.sh@941 -- # uname 00:06:00.978 23:08:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.978 23:08:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3730744 00:06:00.978 23:08:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.978 23:08:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.978 23:08:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3730744' 00:06:00.978 killing process with pid 3730744 00:06:00.978 23:08:50 -- common/autotest_common.sh@955 -- # kill 3730744 00:06:00.978 23:08:50 -- common/autotest_common.sh@960 -- # wait 3730744 00:06:01.239 00:06:01.239 real 0m1.483s 00:06:01.239 user 0m1.583s 00:06:01.239 sys 0m0.471s 00:06:01.239 23:08:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.239 23:08:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.239 ************************************ 00:06:01.239 END TEST default_locks_via_rpc 00:06:01.239 ************************************ 00:06:01.239 23:08:50 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:01.239 23:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.239 23:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.239 23:08:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.239 ************************************ 00:06:01.239 START TEST non_locking_app_on_locked_coremask 
00:06:01.239 ************************************ 00:06:01.239 23:08:50 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:01.239 23:08:50 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3731118 00:06:01.239 23:08:50 -- event/cpu_locks.sh@81 -- # waitforlisten 3731118 /var/tmp/spdk.sock 00:06:01.239 23:08:50 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.239 23:08:50 -- common/autotest_common.sh@817 -- # '[' -z 3731118 ']' 00:06:01.239 23:08:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.239 23:08:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:01.239 23:08:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.239 23:08:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:01.239 23:08:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.501 [2024-04-26 23:08:50.521465] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:01.501 [2024-04-26 23:08:50.521524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731118 ] 00:06:01.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.501 [2024-04-26 23:08:50.586043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.501 [2024-04-26 23:08:50.623680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.075 23:08:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.075 23:08:51 -- common/autotest_common.sh@850 -- # return 0 00:06:02.075 23:08:51 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3731184 00:06:02.075 23:08:51 -- event/cpu_locks.sh@85 -- # waitforlisten 3731184 /var/tmp/spdk2.sock 00:06:02.075 23:08:51 -- common/autotest_common.sh@817 -- # '[' -z 3731184 ']' 00:06:02.075 23:08:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.075 23:08:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.075 23:08:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.075 23:08:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.075 23:08:51 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 23:08:51 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:02.335 [2024-04-26 23:08:51.332766] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:02.335 [2024-04-26 23:08:51.332818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731184 ] 00:06:02.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.335 [2024-04-26 23:08:51.422425] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
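The trace above shows why this test passes: the first target claims core 0 normally, while the second is started with --disable-cpumask-locks and its own RPC socket, so it logs "CPU core locks deactivated." and never contends for the lock. In outline (binary path shortened from the trace):

    spdk_tgt -m 0x1 &                                                # claims the core 0 lock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # shares core 0, no claim

The default_locks_via_rpc test earlier exercises the same switch at runtime instead, through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs.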
00:06:02.335 [2024-04-26 23:08:51.422453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.335 [2024-04-26 23:08:51.479626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.908 23:08:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.908 23:08:52 -- common/autotest_common.sh@850 -- # return 0 00:06:02.908 23:08:52 -- event/cpu_locks.sh@87 -- # locks_exist 3731118 00:06:02.908 23:08:52 -- event/cpu_locks.sh@22 -- # lslocks -p 3731118 00:06:02.908 23:08:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.480 lslocks: write error 00:06:03.480 23:08:52 -- event/cpu_locks.sh@89 -- # killprocess 3731118 00:06:03.480 23:08:52 -- common/autotest_common.sh@936 -- # '[' -z 3731118 ']' 00:06:03.480 23:08:52 -- common/autotest_common.sh@940 -- # kill -0 3731118 00:06:03.480 23:08:52 -- common/autotest_common.sh@941 -- # uname 00:06:03.480 23:08:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.480 23:08:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3731118 00:06:03.480 23:08:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:03.480 23:08:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:03.480 23:08:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3731118' 00:06:03.480 killing process with pid 3731118 00:06:03.480 23:08:52 -- common/autotest_common.sh@955 -- # kill 3731118 00:06:03.480 23:08:52 -- common/autotest_common.sh@960 -- # wait 3731118 00:06:04.051 23:08:53 -- event/cpu_locks.sh@90 -- # killprocess 3731184 00:06:04.051 23:08:53 -- common/autotest_common.sh@936 -- # '[' -z 3731184 ']' 00:06:04.051 23:08:53 -- common/autotest_common.sh@940 -- # kill -0 3731184 00:06:04.051 23:08:53 -- common/autotest_common.sh@941 -- # uname 00:06:04.051 23:08:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.051 23:08:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3731184 00:06:04.051 23:08:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.051 23:08:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.051 23:08:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3731184' 00:06:04.051 killing process with pid 3731184 00:06:04.051 23:08:53 -- common/autotest_common.sh@955 -- # kill 3731184 00:06:04.051 23:08:53 -- common/autotest_common.sh@960 -- # wait 3731184 00:06:04.312 00:06:04.312 real 0m2.892s 00:06:04.312 user 0m3.150s 00:06:04.312 sys 0m0.913s 00:06:04.312 23:08:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.312 23:08:53 -- common/autotest_common.sh@10 -- # set +x 00:06:04.313 ************************************ 00:06:04.313 END TEST non_locking_app_on_locked_coremask 00:06:04.313 ************************************ 00:06:04.313 23:08:53 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:04.313 23:08:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.313 23:08:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.313 23:08:53 -- common/autotest_common.sh@10 -- # set +x 00:06:04.313 ************************************ 00:06:04.313 START TEST locking_app_on_unlocked_coremask 00:06:04.313 ************************************ 00:06:04.313 23:08:53 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:04.313 23:08:53 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3731830 00:06:04.313 23:08:53 -- 
event/cpu_locks.sh@99 -- # waitforlisten 3731830 /var/tmp/spdk.sock 00:06:04.313 23:08:53 -- common/autotest_common.sh@817 -- # '[' -z 3731830 ']' 00:06:04.313 23:08:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.313 23:08:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:04.313 23:08:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.313 23:08:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:04.313 23:08:53 -- common/autotest_common.sh@10 -- # set +x 00:06:04.313 23:08:53 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:04.573 [2024-04-26 23:08:53.580574] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:04.573 [2024-04-26 23:08:53.580642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731830 ] 00:06:04.573 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.573 [2024-04-26 23:08:53.644853] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.573 [2024-04-26 23:08:53.644887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.573 [2024-04-26 23:08:53.682078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.143 23:08:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:05.143 23:08:54 -- common/autotest_common.sh@850 -- # return 0 00:06:05.143 23:08:54 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3731848 00:06:05.143 23:08:54 -- event/cpu_locks.sh@103 -- # waitforlisten 3731848 /var/tmp/spdk2.sock 00:06:05.143 23:08:54 -- common/autotest_common.sh@817 -- # '[' -z 3731848 ']' 00:06:05.143 23:08:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.143 23:08:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:05.143 23:08:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.143 23:08:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:05.143 23:08:54 -- common/autotest_common.sh@10 -- # set +x 00:06:05.143 23:08:54 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.143 [2024-04-26 23:08:54.392918] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:05.143 [2024-04-26 23:08:54.392970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731848 ] 00:06:05.402 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.403 [2024-04-26 23:08:54.482157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.403 [2024-04-26 23:08:54.538465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.971 23:08:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:05.971 23:08:55 -- common/autotest_common.sh@850 -- # return 0 00:06:05.972 23:08:55 -- event/cpu_locks.sh@105 -- # locks_exist 3731848 00:06:05.972 23:08:55 -- event/cpu_locks.sh@22 -- # lslocks -p 3731848 00:06:05.972 23:08:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.540 lslocks: write error 00:06:06.540 23:08:55 -- event/cpu_locks.sh@107 -- # killprocess 3731830 00:06:06.540 23:08:55 -- common/autotest_common.sh@936 -- # '[' -z 3731830 ']' 00:06:06.540 23:08:55 -- common/autotest_common.sh@940 -- # kill -0 3731830 00:06:06.540 23:08:55 -- common/autotest_common.sh@941 -- # uname 00:06:06.540 23:08:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:06.540 23:08:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3731830 00:06:06.540 23:08:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:06.540 23:08:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:06.540 23:08:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3731830' 00:06:06.540 killing process with pid 3731830 00:06:06.540 23:08:55 -- common/autotest_common.sh@955 -- # kill 3731830 00:06:06.540 23:08:55 -- common/autotest_common.sh@960 -- # wait 3731830 00:06:07.111 23:08:56 -- event/cpu_locks.sh@108 -- # killprocess 3731848 00:06:07.111 23:08:56 -- common/autotest_common.sh@936 -- # '[' -z 3731848 ']' 00:06:07.111 23:08:56 -- common/autotest_common.sh@940 -- # kill -0 3731848 00:06:07.111 23:08:56 -- common/autotest_common.sh@941 -- # uname 00:06:07.111 23:08:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.111 23:08:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3731848 00:06:07.111 23:08:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.111 23:08:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.111 23:08:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3731848' 00:06:07.111 killing process with pid 3731848 00:06:07.111 23:08:56 -- common/autotest_common.sh@955 -- # kill 3731848 00:06:07.111 23:08:56 -- common/autotest_common.sh@960 -- # wait 3731848 00:06:07.111 00:06:07.111 real 0m2.829s 00:06:07.111 user 0m3.069s 00:06:07.111 sys 0m0.855s 00:06:07.111 23:08:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.111 23:08:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.111 ************************************ 00:06:07.111 END TEST locking_app_on_unlocked_coremask 00:06:07.111 ************************************ 00:06:07.371 23:08:56 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:07.371 23:08:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.371 23:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.371 23:08:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.371 
************************************ 00:06:07.371 START TEST locking_app_on_locked_coremask 00:06:07.371 ************************************ 00:06:07.371 23:08:56 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:06:07.371 23:08:56 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3732373 00:06:07.372 23:08:56 -- event/cpu_locks.sh@116 -- # waitforlisten 3732373 /var/tmp/spdk.sock 00:06:07.372 23:08:56 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.372 23:08:56 -- common/autotest_common.sh@817 -- # '[' -z 3732373 ']' 00:06:07.372 23:08:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.372 23:08:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:07.372 23:08:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.372 23:08:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:07.372 23:08:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.372 [2024-04-26 23:08:56.597903] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:07.372 [2024-04-26 23:08:56.597954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732373 ] 00:06:07.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.632 [2024-04-26 23:08:56.657790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.632 [2024-04-26 23:08:56.687285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.203 23:08:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.203 23:08:57 -- common/autotest_common.sh@850 -- # return 0 00:06:08.203 23:08:57 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.203 23:08:57 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3732559 00:06:08.203 23:08:57 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3732559 /var/tmp/spdk2.sock 00:06:08.203 23:08:57 -- common/autotest_common.sh@638 -- # local es=0 00:06:08.203 23:08:57 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3732559 /var/tmp/spdk2.sock 00:06:08.203 23:08:57 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:08.203 23:08:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:08.204 23:08:57 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:08.204 23:08:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:08.204 23:08:57 -- common/autotest_common.sh@641 -- # waitforlisten 3732559 /var/tmp/spdk2.sock 00:06:08.204 23:08:57 -- common/autotest_common.sh@817 -- # '[' -z 3732559 ']' 00:06:08.204 23:08:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.204 23:08:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:08.204 23:08:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
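The failure recorded in the trace that follows ("Cannot create lock on core 0, probably process 3732373 has claimed it") comes from an advisory per-core lock file, visible later as /var/tmp/spdk_cpu_lock_000 in check_remaining_locks. A rough shell equivalent of the claim, assuming flock(1) stands in for whatever locking call app.c actually makes:

    exec 9>/var/tmp/spdk_cpu_lock_000    # lock file name matches the trace
    flock -n 9 || { echo "Cannot create lock on core 0"; exit 1; }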
00:06:08.204 23:08:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:08.204 23:08:57 -- common/autotest_common.sh@10 -- # set +x 00:06:08.204 [2024-04-26 23:08:57.390779] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:08.204 [2024-04-26 23:08:57.390827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732559 ] 00:06:08.204 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.463 [2024-04-26 23:08:57.481120] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3732373 has claimed it. 00:06:08.463 [2024-04-26 23:08:57.481161] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3732559) - No such process 00:06:09.032 ERROR: process (pid: 3732559) is no longer running 00:06:09.032 23:08:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:09.032 23:08:58 -- common/autotest_common.sh@850 -- # return 1 00:06:09.032 23:08:58 -- common/autotest_common.sh@641 -- # es=1 00:06:09.032 23:08:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:09.032 23:08:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:09.032 23:08:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:09.032 23:08:58 -- event/cpu_locks.sh@122 -- # locks_exist 3732373 00:06:09.032 23:08:58 -- event/cpu_locks.sh@22 -- # lslocks -p 3732373 00:06:09.032 23:08:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.292 lslocks: write error 00:06:09.292 23:08:58 -- event/cpu_locks.sh@124 -- # killprocess 3732373 00:06:09.292 23:08:58 -- common/autotest_common.sh@936 -- # '[' -z 3732373 ']' 00:06:09.292 23:08:58 -- common/autotest_common.sh@940 -- # kill -0 3732373 00:06:09.292 23:08:58 -- common/autotest_common.sh@941 -- # uname 00:06:09.292 23:08:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.292 23:08:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3732373 00:06:09.292 23:08:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.292 23:08:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.292 23:08:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3732373' 00:06:09.292 killing process with pid 3732373 00:06:09.292 23:08:58 -- common/autotest_common.sh@955 -- # kill 3732373 00:06:09.292 23:08:58 -- common/autotest_common.sh@960 -- # wait 3732373 00:06:09.551 00:06:09.551 real 0m2.068s 00:06:09.551 user 0m2.306s 00:06:09.551 sys 0m0.549s 00:06:09.551 23:08:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.551 23:08:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.551 ************************************ 00:06:09.551 END TEST locking_app_on_locked_coremask 00:06:09.551 ************************************ 00:06:09.551 23:08:58 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:09.551 23:08:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.551 23:08:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.551 23:08:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.551 ************************************ 00:06:09.551 START TEST locking_overlapped_coremask 00:06:09.551 
************************************ 00:06:09.551 23:08:58 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:06:09.551 23:08:58 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3732929 00:06:09.551 23:08:58 -- event/cpu_locks.sh@133 -- # waitforlisten 3732929 /var/tmp/spdk.sock 00:06:09.551 23:08:58 -- common/autotest_common.sh@817 -- # '[' -z 3732929 ']' 00:06:09.551 23:08:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.551 23:08:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:09.551 23:08:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.551 23:08:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:09.551 23:08:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.552 23:08:58 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:09.811 [2024-04-26 23:08:58.831749] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:09.811 [2024-04-26 23:08:58.831806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732929 ] 00:06:09.811 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.811 [2024-04-26 23:08:58.896226] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.811 [2024-04-26 23:08:58.935225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.811 [2024-04-26 23:08:58.935333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.811 [2024-04-26 23:08:58.935337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.381 23:08:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:10.381 23:08:59 -- common/autotest_common.sh@850 -- # return 0 00:06:10.381 23:08:59 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3732951 00:06:10.381 23:08:59 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3732951 /var/tmp/spdk2.sock 00:06:10.381 23:08:59 -- common/autotest_common.sh@638 -- # local es=0 00:06:10.381 23:08:59 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3732951 /var/tmp/spdk2.sock 00:06:10.381 23:08:59 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:10.381 23:08:59 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:10.381 23:08:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:10.381 23:08:59 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:10.381 23:08:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:10.381 23:08:59 -- common/autotest_common.sh@641 -- # waitforlisten 3732951 /var/tmp/spdk2.sock 00:06:10.381 23:08:59 -- common/autotest_common.sh@817 -- # '[' -z 3732951 ']' 00:06:10.381 23:08:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.381 23:08:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:10.381 23:08:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:10.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.381 23:08:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:10.381 23:08:59 -- common/autotest_common.sh@10 -- # set +x 00:06:10.642 [2024-04-26 23:08:59.659412] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:10.642 [2024-04-26 23:08:59.659463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732951 ] 00:06:10.642 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.642 [2024-04-26 23:08:59.734880] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3732929 has claimed it. 00:06:10.642 [2024-04-26 23:08:59.734912] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:11.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3732951) - No such process 00:06:11.212 ERROR: process (pid: 3732951) is no longer running 00:06:11.212 23:09:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.212 23:09:00 -- common/autotest_common.sh@850 -- # return 1 00:06:11.212 23:09:00 -- common/autotest_common.sh@641 -- # es=1 00:06:11.212 23:09:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:11.212 23:09:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:11.212 23:09:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:11.212 23:09:00 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:11.212 23:09:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.212 23:09:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.212 23:09:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.212 23:09:00 -- event/cpu_locks.sh@141 -- # killprocess 3732929 00:06:11.212 23:09:00 -- common/autotest_common.sh@936 -- # '[' -z 3732929 ']' 00:06:11.212 23:09:00 -- common/autotest_common.sh@940 -- # kill -0 3732929 00:06:11.212 23:09:00 -- common/autotest_common.sh@941 -- # uname 00:06:11.212 23:09:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.212 23:09:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3732929 00:06:11.212 23:09:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.212 23:09:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.212 23:09:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3732929' 00:06:11.212 killing process with pid 3732929 00:06:11.212 23:09:00 -- common/autotest_common.sh@955 -- # kill 3732929 00:06:11.212 23:09:00 -- common/autotest_common.sh@960 -- # wait 3732929 00:06:11.473 00:06:11.473 real 0m1.741s 00:06:11.473 user 0m5.010s 00:06:11.473 sys 0m0.366s 00:06:11.473 23:09:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.473 23:09:00 -- common/autotest_common.sh@10 -- # set +x 00:06:11.473 ************************************ 00:06:11.473 END TEST locking_overlapped_coremask 00:06:11.473 ************************************ 00:06:11.473 23:09:00 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:11.473 23:09:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.473 23:09:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.473 23:09:00 -- common/autotest_common.sh@10 -- # set +x 00:06:11.473 ************************************ 00:06:11.473 START TEST locking_overlapped_coremask_via_rpc 00:06:11.473 ************************************ 00:06:11.473 23:09:00 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:11.473 23:09:00 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3733357 00:06:11.473 23:09:00 -- event/cpu_locks.sh@149 -- # waitforlisten 3733357 /var/tmp/spdk.sock 00:06:11.473 23:09:00 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:11.473 23:09:00 -- common/autotest_common.sh@817 -- # '[' -z 3733357 ']' 00:06:11.473 23:09:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.473 23:09:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.473 23:09:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.473 23:09:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.473 23:09:00 -- common/autotest_common.sh@10 -- # set +x 00:06:11.733 [2024-04-26 23:09:00.758499] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:11.733 [2024-04-26 23:09:00.758557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733357 ] 00:06:11.733 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.733 [2024-04-26 23:09:00.824302] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:11.733 [2024-04-26 23:09:00.824337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.733 [2024-04-26 23:09:00.863292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.733 [2024-04-26 23:09:00.863438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.733 [2024-04-26 23:09:00.863441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.305 23:09:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:12.305 23:09:01 -- common/autotest_common.sh@850 -- # return 0 00:06:12.305 23:09:01 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3733494 00:06:12.305 23:09:01 -- event/cpu_locks.sh@153 -- # waitforlisten 3733494 /var/tmp/spdk2.sock 00:06:12.305 23:09:01 -- common/autotest_common.sh@817 -- # '[' -z 3733494 ']' 00:06:12.305 23:09:01 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:12.305 23:09:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.305 23:09:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:12.305 23:09:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
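Both overlapped-coremask tests pair a 0x7 target with a 0x1c target, so exactly one core is contested; the bit arithmetic makes the claim failures below predictable:

    # 0x7  = 0b00111 -> cores 0,1,2
    # 0x1c = 0b11100 -> cores 2,3,4
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4: only bit 2 is shared, i.e. core 2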
00:06:12.305 23:09:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:12.305 23:09:01 -- common/autotest_common.sh@10 -- # set +x 00:06:12.565 [2024-04-26 23:09:01.571523] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:12.565 [2024-04-26 23:09:01.571578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733494 ] 00:06:12.565 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.565 [2024-04-26 23:09:01.649344] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.565 [2024-04-26 23:09:01.649367] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.565 [2024-04-26 23:09:01.705196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.565 [2024-04-26 23:09:01.705211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:12.565 [2024-04-26 23:09:01.705211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.138 23:09:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.138 23:09:02 -- common/autotest_common.sh@850 -- # return 0 00:06:13.138 23:09:02 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.138 23:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.138 23:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.138 23:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.138 23:09:02 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.138 23:09:02 -- common/autotest_common.sh@638 -- # local es=0 00:06:13.138 23:09:02 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.138 23:09:02 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:13.138 23:09:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:13.138 23:09:02 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:13.138 23:09:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:13.138 23:09:02 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.138 23:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.138 23:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.138 [2024-04-26 23:09:02.348901] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3733357 has claimed it. 
00:06:13.138 request: 00:06:13.138 { 00:06:13.138 "method": "framework_enable_cpumask_locks", 00:06:13.138 "req_id": 1 00:06:13.138 } 00:06:13.138 Got JSON-RPC error response 00:06:13.138 response: 00:06:13.138 { 00:06:13.138 "code": -32603, 00:06:13.138 "message": "Failed to claim CPU core: 2" 00:06:13.138 } 00:06:13.138 23:09:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:13.138 23:09:02 -- common/autotest_common.sh@641 -- # es=1 00:06:13.138 23:09:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:13.138 23:09:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:13.138 23:09:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:13.138 23:09:02 -- event/cpu_locks.sh@158 -- # waitforlisten 3733357 /var/tmp/spdk.sock 00:06:13.138 23:09:02 -- common/autotest_common.sh@817 -- # '[' -z 3733357 ']' 00:06:13.138 23:09:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.138 23:09:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.138 23:09:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.138 23:09:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.138 23:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.398 23:09:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.398 23:09:02 -- common/autotest_common.sh@850 -- # return 0 00:06:13.398 23:09:02 -- event/cpu_locks.sh@159 -- # waitforlisten 3733494 /var/tmp/spdk2.sock 00:06:13.398 23:09:02 -- common/autotest_common.sh@817 -- # '[' -z 3733494 ']' 00:06:13.398 23:09:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.398 23:09:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.398 23:09:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
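The request/response pair above is plain JSON-RPC over the target's Unix socket. Assuming SPDK's stock scripts/rpc.py helper, the same exchange can be replayed by hand; the call succeeds against the first target and reproduces the -32603 error against the second, since core 2 is the one core both masks contain:

    # Both targets were started with --disable-cpumask-locks; enabling the
    # locks afterwards works once, then fails on the contested core:
    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> error: {"code": -32603, "message": "Failed to claim CPU core: 2"}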
00:06:13.398 23:09:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.398 23:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.659 23:09:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.659 23:09:02 -- common/autotest_common.sh@850 -- # return 0 00:06:13.659 23:09:02 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.660 23:09:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.660 23:09:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.660 23:09:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.660 00:06:13.660 real 0m1.981s 00:06:13.660 user 0m0.761s 00:06:13.660 sys 0m0.151s 00:06:13.660 23:09:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.660 23:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.660 ************************************ 00:06:13.660 END TEST locking_overlapped_coremask_via_rpc 00:06:13.660 ************************************ 00:06:13.660 23:09:02 -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.660 23:09:02 -- event/cpu_locks.sh@15 -- # [[ -z 3733357 ]] 00:06:13.660 23:09:02 -- event/cpu_locks.sh@15 -- # killprocess 3733357 00:06:13.660 23:09:02 -- common/autotest_common.sh@936 -- # '[' -z 3733357 ']' 00:06:13.660 23:09:02 -- common/autotest_common.sh@940 -- # kill -0 3733357 00:06:13.660 23:09:02 -- common/autotest_common.sh@941 -- # uname 00:06:13.660 23:09:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.660 23:09:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3733357 00:06:13.660 23:09:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.660 23:09:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.660 23:09:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3733357' 00:06:13.660 killing process with pid 3733357 00:06:13.660 23:09:02 -- common/autotest_common.sh@955 -- # kill 3733357 00:06:13.660 23:09:02 -- common/autotest_common.sh@960 -- # wait 3733357 00:06:13.920 23:09:02 -- event/cpu_locks.sh@16 -- # [[ -z 3733494 ]] 00:06:13.920 23:09:02 -- event/cpu_locks.sh@16 -- # killprocess 3733494 00:06:13.920 23:09:02 -- common/autotest_common.sh@936 -- # '[' -z 3733494 ']' 00:06:13.920 23:09:02 -- common/autotest_common.sh@940 -- # kill -0 3733494 00:06:13.920 23:09:02 -- common/autotest_common.sh@941 -- # uname 00:06:13.920 23:09:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.920 23:09:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3733494 00:06:13.920 23:09:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:13.920 23:09:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:13.920 23:09:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3733494' 00:06:13.920 killing process with pid 3733494 00:06:13.920 23:09:03 -- common/autotest_common.sh@955 -- # kill 3733494 00:06:13.920 23:09:03 -- common/autotest_common.sh@960 -- # wait 3733494 00:06:14.180 23:09:03 -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.180 23:09:03 -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.180 23:09:03 -- event/cpu_locks.sh@15 -- # [[ -z 3733357 ]] 00:06:14.180 23:09:03 -- event/cpu_locks.sh@15 -- # killprocess 3733357 
00:06:14.180 23:09:03 -- common/autotest_common.sh@936 -- # '[' -z 3733357 ']' 00:06:14.180 23:09:03 -- common/autotest_common.sh@940 -- # kill -0 3733357 00:06:14.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3733357) - No such process 00:06:14.180 23:09:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3733357 is not found' 00:06:14.180 Process with pid 3733357 is not found 00:06:14.180 23:09:03 -- event/cpu_locks.sh@16 -- # [[ -z 3733494 ]] 00:06:14.180 23:09:03 -- event/cpu_locks.sh@16 -- # killprocess 3733494 00:06:14.180 23:09:03 -- common/autotest_common.sh@936 -- # '[' -z 3733494 ']' 00:06:14.180 23:09:03 -- common/autotest_common.sh@940 -- # kill -0 3733494 00:06:14.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3733494) - No such process 00:06:14.180 23:09:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3733494 is not found' 00:06:14.180 Process with pid 3733494 is not found 00:06:14.180 23:09:03 -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.180 00:06:14.180 real 0m16.064s 00:06:14.180 user 0m27.013s 00:06:14.180 sys 0m4.879s 00:06:14.180 23:09:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.180 23:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:14.180 ************************************ 00:06:14.180 END TEST cpu_locks 00:06:14.180 ************************************ 00:06:14.180 00:06:14.180 real 0m42.318s 00:06:14.180 user 1m18.787s 00:06:14.180 sys 0m8.279s 00:06:14.180 23:09:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.180 23:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:14.180 ************************************ 00:06:14.180 END TEST event 00:06:14.180 ************************************ 00:06:14.180 23:09:03 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.180 23:09:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.180 23:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.180 23:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:14.180 ************************************ 00:06:14.180 START TEST thread 00:06:14.180 ************************************ 00:06:14.180 23:09:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.442 * Looking for test storage... 00:06:14.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:14.442 23:09:03 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.442 23:09:03 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:14.442 23:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.442 23:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:14.442 ************************************ 00:06:14.442 START TEST thread_poller_perf 00:06:14.442 ************************************ 00:06:14.442 23:09:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.703 [2024-04-26 23:09:03.707936] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
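poller_perf's flags map directly onto the banner it prints: -b is the number of pollers to register, -l the poller period in microseconds (0, as in the second run below, fires the poller on every reactor iteration), and -t the run time in seconds. Stripped of the run_test wrapper, the first invocation is simply:

    # 1000 pollers, 1 us period, 1 second of runtime
    # (path as in this run, relative to the spdk tree)
    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1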
00:06:14.703 [2024-04-26 23:09:03.708022] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3734182 ] 00:06:14.703 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.703 [2024-04-26 23:09:03.776478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.703 [2024-04-26 23:09:03.813267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.703 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:15.647 ====================================== 00:06:15.647 busy:2408415028 (cyc) 00:06:15.647 total_run_count: 286000 00:06:15.647 tsc_hz: 2400000000 (cyc) 00:06:15.647 ====================================== 00:06:15.647 poller_cost: 8421 (cyc), 3508 (nsec) 00:06:15.647 00:06:15.647 real 0m1.172s 00:06:15.647 user 0m1.093s 00:06:15.648 sys 0m0.073s 00:06:15.648 23:09:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.648 23:09:04 -- common/autotest_common.sh@10 -- # set +x 00:06:15.648 ************************************ 00:06:15.648 END TEST thread_poller_perf 00:06:15.648 ************************************ 00:06:15.648 23:09:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:15.648 23:09:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:15.648 23:09:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.648 23:09:04 -- common/autotest_common.sh@10 -- # set +x 00:06:15.909 ************************************ 00:06:15.909 START TEST thread_poller_perf 00:06:15.909 ************************************ 00:06:15.910 23:09:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:15.910 [2024-04-26 23:09:05.042237] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:15.910 [2024-04-26 23:09:05.042334] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3734345 ] 00:06:15.910 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.910 [2024-04-26 23:09:05.109092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.910 [2024-04-26 23:09:05.145192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.910 Running 1000 pollers for 1 seconds with 0 microseconds period. 
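The summary block above is straightforward TSC arithmetic: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via the reported tsc_hz. For the 1-microsecond-period run:

    poller_cost = 2408415028 cyc / 286000 runs ≈ 8421 cyc
    8421 cyc / 2.4 cyc per ns (tsc_hz = 2400000000) ≈ 3508 nsec

The same formula applies to the 0-period run announced above, whose figures follow.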
00:06:17.297 ====================================== 00:06:17.297 busy:2401931632 (cyc) 00:06:17.297 total_run_count: 3813000 00:06:17.297 tsc_hz: 2400000000 (cyc) 00:06:17.297 ====================================== 00:06:17.297 poller_cost: 629 (cyc), 262 (nsec) 00:06:17.297 00:06:17.297 real 0m1.163s 00:06:17.297 user 0m1.089s 00:06:17.297 sys 0m0.069s 00:06:17.297 23:09:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.297 23:09:06 -- common/autotest_common.sh@10 -- # set +x 00:06:17.297 ************************************ 00:06:17.297 END TEST thread_poller_perf 00:06:17.297 ************************************ 00:06:17.297 23:09:06 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:17.297 00:06:17.297 real 0m2.787s 00:06:17.297 user 0m2.354s 00:06:17.297 sys 0m0.394s 00:06:17.297 23:09:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.297 23:09:06 -- common/autotest_common.sh@10 -- # set +x 00:06:17.297 ************************************ 00:06:17.297 END TEST thread 00:06:17.297 ************************************ 00:06:17.297 23:09:06 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:17.297 23:09:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.297 23:09:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.297 23:09:06 -- common/autotest_common.sh@10 -- # set +x 00:06:17.297 ************************************ 00:06:17.297 START TEST accel 00:06:17.297 ************************************ 00:06:17.297 23:09:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:17.297 * Looking for test storage... 00:06:17.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:17.297 23:09:06 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:17.297 23:09:06 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:17.297 23:09:06 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.297 23:09:06 -- accel/accel.sh@62 -- # spdk_tgt_pid=3734654 00:06:17.297 23:09:06 -- accel/accel.sh@63 -- # waitforlisten 3734654 00:06:17.297 23:09:06 -- common/autotest_common.sh@817 -- # '[' -z 3734654 ']' 00:06:17.297 23:09:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.297 23:09:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:17.297 23:09:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.297 23:09:06 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:17.297 23:09:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:17.297 23:09:06 -- accel/accel.sh@61 -- # build_accel_config 00:06:17.297 23:09:06 -- common/autotest_common.sh@10 -- # set +x 00:06:17.297 23:09:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.297 23:09:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.297 23:09:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.297 23:09:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.297 23:09:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.297 23:09:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.297 23:09:06 -- accel/accel.sh@41 -- # jq -r . 
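The accel suite now starts a plain target and reads back which module is assigned to each opcode through the accel_get_opc_assignments RPC, expecting "software" across the board since no hardware accel module is configured for this run. The same table can be queried directly (output shape abbreviated and assumed, not taken from this log):

    scripts/rpc.py accel_get_opc_assignments
    # {"copy": "software", "fill": "software", "crc32c": "software", ...}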
00:06:17.559 [2024-04-26 23:09:06.572601] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:17.559 [2024-04-26 23:09:06.572669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3734654 ] 00:06:17.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.559 [2024-04-26 23:09:06.640616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.559 [2024-04-26 23:09:06.678703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.132 23:09:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:18.132 23:09:07 -- common/autotest_common.sh@850 -- # return 0 00:06:18.132 23:09:07 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:18.132 23:09:07 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:18.132 23:09:07 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:18.132 23:09:07 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:18.132 23:09:07 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:18.132 23:09:07 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:18.132 23:09:07 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:18.132 23:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.132 23:09:07 -- common/autotest_common.sh@10 -- # set +x 00:06:18.132 23:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # IFS== 00:06:18.132 23:09:07 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.132 23:09:07 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.132 23:09:07 -- accel/accel.sh@75 -- # killprocess 3734654 00:06:18.132 23:09:07 -- common/autotest_common.sh@936 -- # '[' -z 3734654 ']' 00:06:18.132 23:09:07 -- common/autotest_common.sh@940 -- # kill -0 3734654 00:06:18.132 23:09:07 -- common/autotest_common.sh@941 -- # uname 00:06:18.394 23:09:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.394 23:09:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3734654 00:06:18.394 23:09:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.394 23:09:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.394 23:09:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3734654' 00:06:18.394 killing process with pid 3734654 00:06:18.394 23:09:07 -- common/autotest_common.sh@955 -- # kill 3734654 00:06:18.394 23:09:07 -- common/autotest_common.sh@960 -- # wait 3734654 00:06:18.394 23:09:07 -- accel/accel.sh@76 -- # trap - ERR 00:06:18.394 23:09:07 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:18.394 23:09:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:18.394 23:09:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.394 23:09:07 -- common/autotest_common.sh@10 -- # set +x 00:06:18.655 23:09:07 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:18.655 23:09:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:18.655 23:09:07 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:18.655 23:09:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.655 23:09:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.656 23:09:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.656 23:09:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.656 23:09:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.656 23:09:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.656 23:09:07 -- accel/accel.sh@41 -- # jq -r . 00:06:18.656 23:09:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.656 23:09:07 -- common/autotest_common.sh@10 -- # set +x 00:06:18.656 23:09:07 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:18.656 23:09:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:18.656 23:09:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.656 23:09:07 -- common/autotest_common.sh@10 -- # set +x 00:06:18.917 ************************************ 00:06:18.917 START TEST accel_missing_filename 00:06:18.917 ************************************ 00:06:18.917 23:09:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:18.917 23:09:07 -- common/autotest_common.sh@638 -- # local es=0 00:06:18.917 23:09:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:18.917 23:09:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:18.917 23:09:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:18.917 23:09:07 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:18.917 23:09:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:18.917 23:09:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:18.917 23:09:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:18.917 23:09:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.917 23:09:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.917 23:09:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.917 23:09:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.917 23:09:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.917 23:09:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.917 23:09:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.917 23:09:07 -- accel/accel.sh@41 -- # jq -r . 00:06:18.917 [2024-04-26 23:09:08.016684] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:18.917 [2024-04-26 23:09:08.016752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735259 ] 00:06:18.917 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.917 [2024-04-26 23:09:08.082868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.917 [2024-04-26 23:09:08.119664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.917 [2024-04-26 23:09:08.153244] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.181 [2024-04-26 23:09:08.192168] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:19.181 A filename is required. 
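accel_missing_filename passes precisely because compress has no default input: per the option help printed later in this suite, -l names the uncompressed input file, and a transfer size of 0 defaults to that file's size. A passing counterpart to the failing call above, using the same corpus file the compress tests in this run use:

    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib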
00:06:19.181 23:09:08 -- common/autotest_common.sh@641 -- # es=234 00:06:19.181 23:09:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:19.181 23:09:08 -- common/autotest_common.sh@650 -- # es=106 00:06:19.181 23:09:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:19.181 23:09:08 -- common/autotest_common.sh@658 -- # es=1 00:06:19.181 23:09:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:19.181 00:06:19.181 real 0m0.245s 00:06:19.181 user 0m0.173s 00:06:19.181 sys 0m0.114s 00:06:19.181 23:09:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.181 23:09:08 -- common/autotest_common.sh@10 -- # set +x 00:06:19.181 ************************************ 00:06:19.181 END TEST accel_missing_filename 00:06:19.181 ************************************ 00:06:19.181 23:09:08 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.181 23:09:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:19.181 23:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.181 23:09:08 -- common/autotest_common.sh@10 -- # set +x 00:06:19.181 ************************************ 00:06:19.181 START TEST accel_compress_verify 00:06:19.181 ************************************ 00:06:19.182 23:09:08 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.182 23:09:08 -- common/autotest_common.sh@638 -- # local es=0 00:06:19.182 23:09:08 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.182 23:09:08 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:19.182 23:09:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:19.182 23:09:08 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:19.182 23:09:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:19.182 23:09:08 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.182 23:09:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.182 23:09:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.182 23:09:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.182 23:09:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.182 23:09:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.182 23:09:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.182 23:09:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.182 23:09:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.182 23:09:08 -- accel/accel.sh@41 -- # jq -r . 00:06:19.483 [2024-04-26 23:09:08.452331] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:19.483 [2024-04-26 23:09:08.452440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735452 ] 00:06:19.483 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.483 [2024-04-26 23:09:08.523688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.483 [2024-04-26 23:09:08.555749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.483 [2024-04-26 23:09:08.588314] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.483 [2024-04-26 23:09:08.626097] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:19.483 00:06:19.483 Compression does not support the verify option, aborting. 00:06:19.483 23:09:08 -- common/autotest_common.sh@641 -- # es=161 00:06:19.483 23:09:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:19.483 23:09:08 -- common/autotest_common.sh@650 -- # es=33 00:06:19.483 23:09:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:19.483 23:09:08 -- common/autotest_common.sh@658 -- # es=1 00:06:19.483 23:09:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:19.483 00:06:19.483 real 0m0.247s 00:06:19.483 user 0m0.177s 00:06:19.483 sys 0m0.110s 00:06:19.483 23:09:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.483 23:09:08 -- common/autotest_common.sh@10 -- # set +x 00:06:19.483 ************************************ 00:06:19.483 END TEST accel_compress_verify 00:06:19.483 ************************************ 00:06:19.483 23:09:08 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:19.483 23:09:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.483 23:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.483 23:09:08 -- common/autotest_common.sh@10 -- # set +x 00:06:19.770 ************************************ 00:06:19.770 START TEST accel_wrong_workload 00:06:19.770 ************************************ 00:06:19.770 23:09:08 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:19.770 23:09:08 -- common/autotest_common.sh@638 -- # local es=0 00:06:19.770 23:09:08 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:19.770 23:09:08 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:19.770 23:09:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:19.770 23:09:08 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:19.770 23:09:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:19.770 23:09:08 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:19.770 23:09:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:19.770 23:09:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.770 23:09:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.770 23:09:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.770 23:09:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.770 23:09:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.770 23:09:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.770 23:09:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.770 23:09:08 -- accel/accel.sh@41 -- # jq -r . 
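The es juggling in these NOT-wrapped tests is exit-status normalization: a status above 128 (signal-style) has 128 subtracted, and any residual failure is collapsed so the NOT helper only needs a non-zero result. Both runs fit the pattern, 234 - 128 = 106 and 161 - 128 = 33, before settling on es=1. A simplified sketch of the logic (the real helper routes through a case statement):

    es=$?                                  # e.g. 234 or 161 above
    (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106, 161 -> 33
    (( es != 0 )) && es=1                  # collapse any failure to 1 for NOT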
00:06:19.770 Unsupported workload type: foobar 00:06:19.770 [2024-04-26 23:09:08.863414] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:19.770 accel_perf options: 00:06:19.770 [-h help message] 00:06:19.770 [-q queue depth per core] 00:06:19.770 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:19.770 [-T number of threads per core 00:06:19.770 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:19.770 [-t time in seconds] 00:06:19.770 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:19.770 [ dif_verify, , dif_generate, dif_generate_copy 00:06:19.770 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:19.770 [-l for compress/decompress workloads, name of uncompressed input file 00:06:19.770 [-S for crc32c workload, use this seed value (default 0) 00:06:19.770 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:19.770 [-f for fill workload, use this BYTE value (default 255) 00:06:19.770 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:19.770 [-y verify result if this switch is on] 00:06:19.770 [-a tasks to allocate per core (default: same value as -q)] 00:06:19.770 Can be used to spread operations across a wider range of memory. 00:06:19.770 23:09:08 -- common/autotest_common.sh@641 -- # es=1 00:06:19.770 23:09:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:19.770 23:09:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:19.770 23:09:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:19.770 00:06:19.770 real 0m0.034s 00:06:19.770 user 0m0.023s 00:06:19.770 sys 0m0.011s 00:06:19.770 23:09:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.770 23:09:08 -- common/autotest_common.sh@10 -- # set +x 00:06:19.770 ************************************ 00:06:19.770 END TEST accel_wrong_workload 00:06:19.770 ************************************ 00:06:19.770 Error: writing output failed: Broken pipe 00:06:19.770 23:09:08 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:19.770 23:09:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:19.770 23:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.770 23:09:08 -- common/autotest_common.sh@10 -- # set +x 00:06:20.033 ************************************ 00:06:20.033 START TEST accel_negative_buffers 00:06:20.033 ************************************ 00:06:20.033 23:09:09 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:20.033 23:09:09 -- common/autotest_common.sh@638 -- # local es=0 00:06:20.033 23:09:09 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:20.033 23:09:09 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:20.033 23:09:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.033 23:09:09 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:20.033 23:09:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.033 23:09:09 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:20.033 23:09:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:20.033 23:09:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.033 23:09:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.033 23:09:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.033 23:09:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.033 23:09:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.033 23:09:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.033 23:09:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.033 23:09:09 -- accel/accel.sh@41 -- # jq -r . 00:06:20.033 -x option must be non-negative. 00:06:20.033 [2024-04-26 23:09:09.084749] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:20.033 accel_perf options: 00:06:20.033 [-h help message] 00:06:20.033 [-q queue depth per core] 00:06:20.033 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:20.033 [-T number of threads per core 00:06:20.033 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:20.033 [-t time in seconds] 00:06:20.033 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:20.033 [ dif_verify, , dif_generate, dif_generate_copy 00:06:20.033 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:20.033 [-l for compress/decompress workloads, name of uncompressed input file 00:06:20.033 [-S for crc32c workload, use this seed value (default 0) 00:06:20.033 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:20.033 [-f for fill workload, use this BYTE value (default 255) 00:06:20.033 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:20.033 [-y verify result if this switch is on] 00:06:20.033 [-a tasks to allocate per core (default: same value as -q)] 00:06:20.033 Can be used to spread operations across a wider range of memory. 
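accel_negative_buffers trips the same option parser one step earlier: per the help text above, -x sets the number of xor source buffers with a stated minimum of 2, so -x -1 is rejected before any work is queued. The smallest invocation the parser would accept is:

    ./build/examples/accel_perf -t 1 -w xor -y -x 2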
00:06:20.033 23:09:09 -- common/autotest_common.sh@641 -- # es=1 00:06:20.033 23:09:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:20.033 23:09:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:20.033 23:09:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:20.033 00:06:20.033 real 0m0.034s 00:06:20.033 user 0m0.021s 00:06:20.033 sys 0m0.014s 00:06:20.033 23:09:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.033 23:09:09 -- common/autotest_common.sh@10 -- # set +x 00:06:20.033 ************************************ 00:06:20.033 END TEST accel_negative_buffers 00:06:20.033 ************************************ 00:06:20.033 Error: writing output failed: Broken pipe 00:06:20.033 23:09:09 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:20.033 23:09:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:20.033 23:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.033 23:09:09 -- common/autotest_common.sh@10 -- # set +x 00:06:20.033 ************************************ 00:06:20.033 START TEST accel_crc32c 00:06:20.033 ************************************ 00:06:20.033 23:09:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:20.033 23:09:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.033 23:09:09 -- accel/accel.sh@17 -- # local accel_module 00:06:20.033 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.033 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.033 23:09:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:20.033 23:09:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:20.033 23:09:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.033 23:09:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.033 23:09:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.033 23:09:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.033 23:09:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.033 23:09:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.033 23:09:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.033 23:09:09 -- accel/accel.sh@41 -- # jq -r . 00:06:20.294 [2024-04-26 23:09:09.305539] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
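The long val= walk that follows appears to be the accel_test harness checking this run's parameters one by one (a reading of the trace, not verified here): opcode crc32c, seed 32 from -S, the default 4096-byte transfer, the software module, one thread for one second. Without the wrapper, the run is just:

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y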
00:06:20.294 [2024-04-26 23:09:09.305625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735898 ] 00:06:20.294 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.294 [2024-04-26 23:09:09.371520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.294 [2024-04-26 23:09:09.408267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val= 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val= 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val=0x1 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val= 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val= 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val=crc32c 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val=32 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val= 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val=software 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@22 -- # accel_module=software 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val=32 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val=32 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- 
accel/accel.sh@20 -- # val=1 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val=Yes 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val= 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:20.294 23:09:09 -- accel/accel.sh@20 -- # val= 00:06:20.294 23:09:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # IFS=: 00:06:20.294 23:09:09 -- accel/accel.sh@19 -- # read -r var val 00:06:21.679 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.679 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.679 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.679 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.679 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.679 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.679 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.679 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.679 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.679 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.679 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.679 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.679 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.679 23:09:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.679 23:09:10 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:21.679 23:09:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.679 00:06:21.679 real 0m1.249s 00:06:21.679 user 0m1.150s 00:06:21.679 sys 0m0.109s 00:06:21.679 23:09:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.679 23:09:10 -- common/autotest_common.sh@10 -- # set +x 00:06:21.679 ************************************ 00:06:21.679 END TEST accel_crc32c 00:06:21.679 ************************************ 00:06:21.679 23:09:10 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:21.680 23:09:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:21.680 23:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.680 23:09:10 -- common/autotest_common.sh@10 -- # set +x 00:06:21.680 ************************************ 00:06:21.680 START TEST 
accel_crc32c_C2 00:06:21.680 ************************************ 00:06:21.680 23:09:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:21.680 23:09:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.680 23:09:10 -- accel/accel.sh@17 -- # local accel_module 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:21.680 23:09:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:21.680 23:09:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.680 23:09:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.680 23:09:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.680 23:09:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.680 23:09:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.680 23:09:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.680 23:09:10 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.680 23:09:10 -- accel/accel.sh@41 -- # jq -r . 00:06:21.680 [2024-04-26 23:09:10.738832] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:21.680 [2024-04-26 23:09:10.738925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736290 ] 00:06:21.680 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.680 [2024-04-26 23:09:10.803857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.680 [2024-04-26 23:09:10.839277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val=0x1 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val=crc32c 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val=0 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val=software 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val=32 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val=32 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val=1 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val=Yes 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:21.680 23:09:10 -- accel/accel.sh@20 -- # val= 00:06:21.680 23:09:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # IFS=: 00:06:21.680 23:09:10 -- accel/accel.sh@19 -- # read -r var val 00:06:23.062 23:09:11 -- accel/accel.sh@20 -- # val= 00:06:23.062 23:09:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.062 23:09:11 -- accel/accel.sh@19 -- # IFS=: 00:06:23.062 23:09:11 -- accel/accel.sh@19 -- # read -r var val 00:06:23.062 23:09:11 -- accel/accel.sh@20 -- # val= 00:06:23.062 23:09:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.062 23:09:11 -- accel/accel.sh@19 -- # IFS=: 00:06:23.062 23:09:11 -- accel/accel.sh@19 -- # read -r var val 00:06:23.062 23:09:11 -- accel/accel.sh@20 -- # val= 00:06:23.062 23:09:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.062 23:09:11 -- accel/accel.sh@19 -- # IFS=: 00:06:23.062 23:09:11 -- accel/accel.sh@19 -- # read -r var val 00:06:23.062 23:09:11 -- accel/accel.sh@20 -- # val= 00:06:23.062 23:09:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.062 23:09:11 -- accel/accel.sh@19 -- # IFS=: 00:06:23.062 23:09:11 -- accel/accel.sh@19 -- # read -r var val 00:06:23.062 23:09:11 -- accel/accel.sh@20 -- # val= 00:06:23.062 23:09:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.062 23:09:11 -- accel/accel.sh@19 -- # IFS=: 00:06:23.062 23:09:11 -- 
accel/accel.sh@19 -- # read -r var val 00:06:23.062 23:09:11 -- accel/accel.sh@20 -- # val= 00:06:23.062 23:09:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.062 23:09:11 -- accel/accel.sh@19 -- # IFS=: 00:06:23.063 23:09:11 -- accel/accel.sh@19 -- # read -r var val 00:06:23.063 23:09:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.063 23:09:11 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:23.063 23:09:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.063 00:06:23.063 real 0m1.247s 00:06:23.063 user 0m1.146s 00:06:23.063 sys 0m0.111s 00:06:23.063 23:09:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.063 23:09:11 -- common/autotest_common.sh@10 -- # set +x 00:06:23.063 ************************************ 00:06:23.063 END TEST accel_crc32c_C2 00:06:23.063 ************************************ 00:06:23.063 23:09:11 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:23.063 23:09:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:23.063 23:09:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.063 23:09:11 -- common/autotest_common.sh@10 -- # set +x 00:06:23.063 ************************************ 00:06:23.063 START TEST accel_copy 00:06:23.063 ************************************ 00:06:23.063 23:09:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:23.063 23:09:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.063 23:09:12 -- accel/accel.sh@17 -- # local accel_module 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.063 23:09:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:23.063 23:09:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:23.063 23:09:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.063 23:09:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.063 23:09:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.063 23:09:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.063 23:09:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.063 23:09:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.063 23:09:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.063 23:09:12 -- accel/accel.sh@41 -- # jq -r . 00:06:23.063 [2024-04-26 23:09:12.172038] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:23.063 [2024-04-26 23:09:12.172128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736575 ] 00:06:23.063 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.063 [2024-04-26 23:09:12.238143] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.063 [2024-04-26 23:09:12.274526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.063 23:09:12 -- accel/accel.sh@20 -- # val= 00:06:23.063 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.063 23:09:12 -- accel/accel.sh@20 -- # val= 00:06:23.063 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.063 23:09:12 -- accel/accel.sh@20 -- # val=0x1 00:06:23.063 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.063 23:09:12 -- accel/accel.sh@20 -- # val= 00:06:23.063 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.063 23:09:12 -- accel/accel.sh@20 -- # val= 00:06:23.063 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.063 23:09:12 -- accel/accel.sh@20 -- # val=copy 00:06:23.063 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.063 23:09:12 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.063 23:09:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.063 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.063 23:09:12 -- accel/accel.sh@20 -- # val= 00:06:23.063 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.063 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.323 23:09:12 -- accel/accel.sh@20 -- # val=software 00:06:23.323 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.323 23:09:12 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.323 23:09:12 -- accel/accel.sh@20 -- # val=32 00:06:23.323 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.323 23:09:12 -- accel/accel.sh@20 -- # val=32 00:06:23.323 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.323 23:09:12 -- accel/accel.sh@20 -- # val=1 00:06:23.323 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.323 23:09:12 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:23.323 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.323 23:09:12 -- accel/accel.sh@20 -- # val=Yes 00:06:23.323 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.323 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.324 23:09:12 -- accel/accel.sh@20 -- # val= 00:06:23.324 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.324 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.324 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:23.324 23:09:12 -- accel/accel.sh@20 -- # val= 00:06:23.324 23:09:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.324 23:09:12 -- accel/accel.sh@19 -- # IFS=: 00:06:23.324 23:09:12 -- accel/accel.sh@19 -- # read -r var val 00:06:24.263 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.263 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.263 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.263 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.263 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.263 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.263 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.263 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.263 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.263 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.263 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.263 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.263 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.263 23:09:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.263 23:09:13 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:24.263 23:09:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.263 00:06:24.263 real 0m1.250s 00:06:24.263 user 0m1.155s 00:06:24.263 sys 0m0.109s 00:06:24.263 23:09:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.263 23:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:24.263 ************************************ 00:06:24.263 END TEST accel_copy 00:06:24.263 ************************************ 00:06:24.263 23:09:13 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.263 23:09:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:24.263 23:09:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.263 23:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:24.524 ************************************ 00:06:24.524 START TEST accel_fill 00:06:24.524 ************************************ 00:06:24.524 23:09:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.524 23:09:13 -- accel/accel.sh@16 -- # local accel_opc 
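The accel_fill test launched above layers three more options onto the common set: -f 128, -q 64 and -a 64. Reading them as accel_perf's fill byte value, queue depth and per-core task allocation respectively (an assumption on my part; the log itself never expands them), the equivalent standalone run would be roughly:

    # Sketch (flag meanings assumed as noted): fill 4 KiB buffers with byte
    # value 128 at queue depth 64, allocating 64 tasks per core, verifying
    # the output for one second.
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y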
00:06:24.524 23:09:13 -- accel/accel.sh@17 -- # local accel_module 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.524 23:09:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.524 23:09:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.524 23:09:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.524 23:09:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.524 23:09:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.524 23:09:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.524 23:09:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.524 23:09:13 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.524 23:09:13 -- accel/accel.sh@41 -- # jq -r . 00:06:24.524 [2024-04-26 23:09:13.607974] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:24.524 [2024-04-26 23:09:13.608059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736805 ] 00:06:24.524 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.524 [2024-04-26 23:09:13.674327] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.524 [2024-04-26 23:09:13.710602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val=0x1 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val=fill 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val=0x80 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 
-- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val=software 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@22 -- # accel_module=software 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val=64 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val=64 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val=1 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val=Yes 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:24.524 23:09:13 -- accel/accel.sh@20 -- # val= 00:06:24.524 23:09:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # IFS=: 00:06:24.524 23:09:13 -- accel/accel.sh@19 -- # read -r var val 00:06:25.908 23:09:14 -- accel/accel.sh@20 -- # val= 00:06:25.908 23:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.908 23:09:14 -- accel/accel.sh@19 -- # IFS=: 00:06:25.908 23:09:14 -- accel/accel.sh@19 -- # read -r var val 00:06:25.908 23:09:14 -- accel/accel.sh@20 -- # val= 00:06:25.908 23:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.908 23:09:14 -- accel/accel.sh@19 -- # IFS=: 00:06:25.908 23:09:14 -- accel/accel.sh@19 -- # read -r var val 00:06:25.908 23:09:14 -- accel/accel.sh@20 -- # val= 00:06:25.908 23:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.908 23:09:14 -- accel/accel.sh@19 -- # IFS=: 00:06:25.909 23:09:14 -- accel/accel.sh@19 -- # read -r var val 00:06:25.909 23:09:14 -- accel/accel.sh@20 -- # val= 00:06:25.909 23:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.909 23:09:14 -- accel/accel.sh@19 -- # IFS=: 00:06:25.909 23:09:14 -- accel/accel.sh@19 -- # read -r var val 00:06:25.909 23:09:14 -- accel/accel.sh@20 -- # val= 00:06:25.909 23:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.909 23:09:14 -- accel/accel.sh@19 -- # IFS=: 00:06:25.909 23:09:14 -- accel/accel.sh@19 -- # read -r var val 00:06:25.909 23:09:14 -- accel/accel.sh@20 -- # val= 00:06:25.909 23:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.909 23:09:14 -- accel/accel.sh@19 
-- # IFS=: 00:06:25.909 23:09:14 -- accel/accel.sh@19 -- # read -r var val 00:06:25.909 23:09:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.909 23:09:14 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:25.909 23:09:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.909 00:06:25.909 real 0m1.250s 00:06:25.909 user 0m1.151s 00:06:25.909 sys 0m0.109s 00:06:25.909 23:09:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.909 23:09:14 -- common/autotest_common.sh@10 -- # set +x 00:06:25.909 ************************************ 00:06:25.909 END TEST accel_fill 00:06:25.909 ************************************ 00:06:25.909 23:09:14 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:25.909 23:09:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:25.909 23:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.909 23:09:14 -- common/autotest_common.sh@10 -- # set +x 00:06:25.909 ************************************ 00:06:25.909 START TEST accel_copy_crc32c 00:06:25.909 ************************************ 00:06:25.909 23:09:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:25.909 23:09:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.909 23:09:15 -- accel/accel.sh@17 -- # local accel_module 00:06:25.909 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:25.909 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:25.909 23:09:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:25.909 23:09:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:25.909 23:09:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.909 23:09:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.909 23:09:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.909 23:09:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.909 23:09:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.909 23:09:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.909 23:09:15 -- accel/accel.sh@40 -- # local IFS=, 00:06:25.909 23:09:15 -- accel/accel.sh@41 -- # jq -r . 00:06:25.909 [2024-04-26 23:09:15.045367] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
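copy_crc32c fuses a buffer copy with a CRC32C computation over the copied data, which is presumably why the trace for this test carries two 4096-byte values (source and destination) plus what appears to be a zero seed, where plain crc32c had one buffer. A minimal standalone equivalent, under the same flag assumptions as earlier:

    # Sketch: fused copy + CRC32C of 4 KiB buffers for one second, verified.
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y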
00:06:25.909 [2024-04-26 23:09:15.045461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737051 ] 00:06:25.909 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.909 [2024-04-26 23:09:15.117219] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.909 [2024-04-26 23:09:15.154936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val= 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val= 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val=0x1 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val= 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val= 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val=0 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val= 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val=software 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@22 -- # accel_module=software 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val=32 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 
00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val=32 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val=1 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val=Yes 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val= 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:26.169 23:09:15 -- accel/accel.sh@20 -- # val= 00:06:26.169 23:09:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # IFS=: 00:06:26.169 23:09:15 -- accel/accel.sh@19 -- # read -r var val 00:06:27.109 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.109 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.109 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.109 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.109 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.109 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.109 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.109 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.109 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.109 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.109 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.109 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.109 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.109 23:09:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.109 23:09:16 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:27.109 23:09:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.109 00:06:27.109 real 0m1.258s 00:06:27.109 user 0m1.151s 00:06:27.109 sys 0m0.118s 00:06:27.109 23:09:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.109 23:09:16 -- common/autotest_common.sh@10 -- # set +x 00:06:27.109 ************************************ 00:06:27.109 END TEST accel_copy_crc32c 00:06:27.109 ************************************ 00:06:27.109 23:09:16 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.109 
23:09:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:27.109 23:09:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.109 23:09:16 -- common/autotest_common.sh@10 -- # set +x 00:06:27.369 ************************************ 00:06:27.369 START TEST accel_copy_crc32c_C2 00:06:27.369 ************************************ 00:06:27.369 23:09:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:27.369 23:09:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.369 23:09:16 -- accel/accel.sh@17 -- # local accel_module 00:06:27.369 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.369 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.369 23:09:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:27.369 23:09:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:27.369 23:09:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.369 23:09:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.369 23:09:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.369 23:09:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.369 23:09:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.369 23:09:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.369 23:09:16 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.369 23:09:16 -- accel/accel.sh@41 -- # jq -r . 00:06:27.369 [2024-04-26 23:09:16.478389] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:27.369 [2024-04-26 23:09:16.478448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737407 ] 00:06:27.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.369 [2024-04-26 23:09:16.539144] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.369 [2024-04-26 23:09:16.567519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.369 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.369 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.369 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.369 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.369 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.369 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.369 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.369 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.369 23:09:16 -- accel/accel.sh@20 -- # val=0x1 00:06:27.369 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 
23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val=0 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val=software 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val=32 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val=32 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val=1 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val=Yes 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:27.370 23:09:16 -- accel/accel.sh@20 -- # val= 00:06:27.370 23:09:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # IFS=: 00:06:27.370 23:09:16 -- accel/accel.sh@19 -- # read -r var val 00:06:28.753 23:09:17 -- accel/accel.sh@20 -- # val= 00:06:28.753 23:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # IFS=: 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # read -r var val 00:06:28.753 23:09:17 -- accel/accel.sh@20 -- # val= 00:06:28.753 23:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # IFS=: 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # read -r var val 00:06:28.753 23:09:17 -- accel/accel.sh@20 -- # val= 00:06:28.753 23:09:17 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # IFS=: 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # read -r var val 00:06:28.753 23:09:17 -- accel/accel.sh@20 -- # val= 00:06:28.753 23:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # IFS=: 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # read -r var val 00:06:28.753 23:09:17 -- accel/accel.sh@20 -- # val= 00:06:28.753 23:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # IFS=: 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # read -r var val 00:06:28.753 23:09:17 -- accel/accel.sh@20 -- # val= 00:06:28.753 23:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # IFS=: 00:06:28.753 23:09:17 -- accel/accel.sh@19 -- # read -r var val 00:06:28.753 23:09:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.753 23:09:17 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:28.753 23:09:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.753 00:06:28.753 real 0m1.232s 00:06:28.753 user 0m1.150s 00:06:28.753 sys 0m0.093s 00:06:28.754 23:09:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.754 23:09:17 -- common/autotest_common.sh@10 -- # set +x 00:06:28.754 ************************************ 00:06:28.754 END TEST accel_copy_crc32c_C2 00:06:28.754 ************************************ 00:06:28.754 23:09:17 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:28.754 23:09:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:28.754 23:09:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.754 23:09:17 -- common/autotest_common.sh@10 -- # set +x 00:06:28.754 ************************************ 00:06:28.754 START TEST accel_dualcast 00:06:28.754 ************************************ 00:06:28.754 23:09:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:28.754 23:09:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.754 23:09:17 -- accel/accel.sh@17 -- # local accel_module 00:06:28.754 23:09:17 -- accel/accel.sh@19 -- # IFS=: 00:06:28.754 23:09:17 -- accel/accel.sh@19 -- # read -r var val 00:06:28.754 23:09:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:28.754 23:09:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:28.754 23:09:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.754 23:09:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.754 23:09:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.754 23:09:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.754 23:09:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.754 23:09:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.754 23:09:17 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.754 23:09:17 -- accel/accel.sh@41 -- # jq -r . 00:06:28.754 [2024-04-26 23:09:17.901066] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:28.754 [2024-04-26 23:09:17.901134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737767 ] 00:06:28.754 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.754 [2024-04-26 23:09:17.966187] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.754 [2024-04-26 23:09:18.002279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val= 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val= 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val=0x1 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val= 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val= 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val=dualcast 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val= 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val=software 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@22 -- # accel_module=software 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val=32 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val=32 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val=1 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 
-- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val=Yes 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val= 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.014 23:09:18 -- accel/accel.sh@20 -- # val= 00:06:29.014 23:09:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # IFS=: 00:06:29.014 23:09:18 -- accel/accel.sh@19 -- # read -r var val 00:06:29.955 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:29.955 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:29.955 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:29.955 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:29.955 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:29.955 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:29.955 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:29.955 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:29.955 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:29.955 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:29.955 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:29.955 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:29.955 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:29.955 23:09:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.955 23:09:19 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:29.955 23:09:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.955 00:06:29.955 real 0m1.248s 00:06:29.955 user 0m1.148s 00:06:29.955 sys 0m0.109s 00:06:29.955 23:09:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.955 23:09:19 -- common/autotest_common.sh@10 -- # set +x 00:06:29.955 ************************************ 00:06:29.955 END TEST accel_dualcast 00:06:29.955 ************************************ 00:06:29.955 23:09:19 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:29.955 23:09:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:29.955 23:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.955 23:09:19 -- common/autotest_common.sh@10 -- # set +x 00:06:30.217 ************************************ 00:06:30.217 START TEST accel_compare 00:06:30.217 ************************************ 00:06:30.217 23:09:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:30.217 23:09:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.217 23:09:19 
-- accel/accel.sh@17 -- # local accel_module 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.217 23:09:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:30.217 23:09:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:30.217 23:09:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.217 23:09:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.217 23:09:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.217 23:09:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.217 23:09:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.217 23:09:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.217 23:09:19 -- accel/accel.sh@40 -- # local IFS=, 00:06:30.217 23:09:19 -- accel/accel.sh@41 -- # jq -r . 00:06:30.217 [2024-04-26 23:09:19.340401] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:30.217 [2024-04-26 23:09:19.340460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738122 ] 00:06:30.217 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.217 [2024-04-26 23:09:19.402990] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.217 [2024-04-26 23:09:19.431216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.217 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:30.217 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.217 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:30.217 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.217 23:09:19 -- accel/accel.sh@20 -- # val=0x1 00:06:30.217 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.217 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:30.217 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.217 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:30.217 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.217 23:09:19 -- accel/accel.sh@20 -- # val=compare 00:06:30.217 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.217 23:09:19 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.217 23:09:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.217 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.217 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:30.217 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.217 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.217 23:09:19 -- 
accel/accel.sh@20 -- # val=software 00:06:30.217 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.217 23:09:19 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 23:09:19 -- accel/accel.sh@20 -- # val=32 00:06:30.478 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 23:09:19 -- accel/accel.sh@20 -- # val=32 00:06:30.478 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 23:09:19 -- accel/accel.sh@20 -- # val=1 00:06:30.478 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 23:09:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.478 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 23:09:19 -- accel/accel.sh@20 -- # val=Yes 00:06:30.478 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:30.478 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:30.478 23:09:19 -- accel/accel.sh@20 -- # val= 00:06:30.478 23:09:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # IFS=: 00:06:30.478 23:09:19 -- accel/accel.sh@19 -- # read -r var val 00:06:31.419 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.419 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.419 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.419 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.419 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.419 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.419 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.419 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.419 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.419 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.419 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.419 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.419 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.419 23:09:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.419 23:09:20 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:31.419 23:09:20 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:31.419 00:06:31.419 real 0m1.235s 00:06:31.419 user 0m1.142s 00:06:31.419 sys 0m0.104s 00:06:31.419 23:09:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.419 23:09:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.419 ************************************ 00:06:31.419 END TEST accel_compare 00:06:31.419 ************************************ 00:06:31.419 23:09:20 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:31.419 23:09:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:31.419 23:09:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.419 23:09:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.679 ************************************ 00:06:31.679 START TEST accel_xor 00:06:31.679 ************************************ 00:06:31.679 23:09:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:31.679 23:09:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.679 23:09:20 -- accel/accel.sh@17 -- # local accel_module 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:31.679 23:09:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:31.679 23:09:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.679 23:09:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.679 23:09:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.679 23:09:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.679 23:09:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.679 23:09:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.679 23:09:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.679 23:09:20 -- accel/accel.sh@41 -- # jq -r . 00:06:31.679 [2024-04-26 23:09:20.764958] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
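The compare pass above finishes with bash's time output (the real/user/sys triplet) and the closing [[ -n software ]] / [[ -n compare ]] checks, which assert that the software accel module executed the compare opcode. A minimal sketch of reproducing that run by hand, assuming the checkout path shown in the xtrace; SPDK_ROOT is just a shorthand introduced here, and the -c /dev/fd/62 JSON config is omitted because accel_json_cfg=() above shows it is empty in this run:

  # -t 1: run for one second ('1 seconds' above); -w compare: opcode under test
  # -y: verify each result (val=Yes above)
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w compare -y

The accel_xor pass starting here follows the same pattern with -w xor and, per the val=2 line below, the default two xor source buffers.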
00:06:31.679 [2024-04-26 23:09:20.765049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738485 ] 00:06:31.679 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.679 [2024-04-26 23:09:20.830831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.679 [2024-04-26 23:09:20.867576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val=0x1 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val=xor 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val=2 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val=software 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val=32 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val=32 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- 
accel/accel.sh@20 -- # val=1 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val=Yes 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:31.679 23:09:20 -- accel/accel.sh@20 -- # val= 00:06:31.679 23:09:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # IFS=: 00:06:31.679 23:09:20 -- accel/accel.sh@19 -- # read -r var val 00:06:33.059 23:09:21 -- accel/accel.sh@20 -- # val= 00:06:33.059 23:09:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # IFS=: 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # read -r var val 00:06:33.059 23:09:21 -- accel/accel.sh@20 -- # val= 00:06:33.059 23:09:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # IFS=: 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # read -r var val 00:06:33.059 23:09:21 -- accel/accel.sh@20 -- # val= 00:06:33.059 23:09:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # IFS=: 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # read -r var val 00:06:33.059 23:09:21 -- accel/accel.sh@20 -- # val= 00:06:33.059 23:09:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # IFS=: 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # read -r var val 00:06:33.059 23:09:21 -- accel/accel.sh@20 -- # val= 00:06:33.059 23:09:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # IFS=: 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # read -r var val 00:06:33.059 23:09:21 -- accel/accel.sh@20 -- # val= 00:06:33.059 23:09:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # IFS=: 00:06:33.059 23:09:21 -- accel/accel.sh@19 -- # read -r var val 00:06:33.059 23:09:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.059 23:09:21 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:33.059 23:09:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.059 00:06:33.059 real 0m1.249s 00:06:33.059 user 0m1.153s 00:06:33.059 sys 0m0.107s 00:06:33.059 23:09:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.059 23:09:21 -- common/autotest_common.sh@10 -- # set +x 00:06:33.059 ************************************ 00:06:33.059 END TEST accel_xor 00:06:33.059 ************************************ 00:06:33.059 23:09:22 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:33.059 23:09:22 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:33.059 23:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.059 23:09:22 -- common/autotest_common.sh@10 -- # set +x 00:06:33.059 ************************************ 00:06:33.059 START TEST accel_xor 
00:06:33.059 ************************************ 00:06:33.059 23:09:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:33.059 23:09:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.059 23:09:22 -- accel/accel.sh@17 -- # local accel_module 00:06:33.059 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.059 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.059 23:09:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:33.059 23:09:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:33.059 23:09:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.059 23:09:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.059 23:09:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.059 23:09:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.059 23:09:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.059 23:09:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.059 23:09:22 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.059 23:09:22 -- accel/accel.sh@41 -- # jq -r . 00:06:33.059 [2024-04-26 23:09:22.195195] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:33.059 [2024-04-26 23:09:22.195265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738709 ] 00:06:33.059 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.059 [2024-04-26 23:09:22.260706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.059 [2024-04-26 23:09:22.297050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val= 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val= 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val=0x1 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val= 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val= 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val=xor 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val=3 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val= 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val=software 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@22 -- # accel_module=software 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val=32 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val=32 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val=1 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val=Yes 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val= 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:33.320 23:09:22 -- accel/accel.sh@20 -- # val= 00:06:33.320 23:09:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # IFS=: 00:06:33.320 23:09:22 -- accel/accel.sh@19 -- # read -r var val 00:06:34.261 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.261 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.261 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.261 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.261 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.261 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.261 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.261 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.261 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.261 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # 
read -r var val 00:06:34.261 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.261 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.261 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.261 23:09:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.261 23:09:23 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:34.261 23:09:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.261 00:06:34.261 real 0m1.247s 00:06:34.261 user 0m1.151s 00:06:34.261 sys 0m0.108s 00:06:34.261 23:09:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.261 23:09:23 -- common/autotest_common.sh@10 -- # set +x 00:06:34.261 ************************************ 00:06:34.261 END TEST accel_xor 00:06:34.261 ************************************ 00:06:34.261 23:09:23 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:34.261 23:09:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:34.261 23:09:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.261 23:09:23 -- common/autotest_common.sh@10 -- # set +x 00:06:34.521 ************************************ 00:06:34.521 START TEST accel_dif_verify 00:06:34.521 ************************************ 00:06:34.521 23:09:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:34.521 23:09:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.521 23:09:23 -- accel/accel.sh@17 -- # local accel_module 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.521 23:09:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:34.521 23:09:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:34.521 23:09:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.521 23:09:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.521 23:09:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.521 23:09:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.521 23:09:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.521 23:09:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.521 23:09:23 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.521 23:09:23 -- accel/accel.sh@41 -- # jq -r . 00:06:34.521 [2024-04-26 23:09:23.632701] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
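The second accel_xor pass re-runs the workload with -x 3, raising the xor source-buffer count from the default 2 to the val=3 shown above. The accel_dif_verify pass starting here drops -y and instead describes DIF-shaped buffers: the val lines that follow carve a 4096-byte transfer into 512-byte blocks carrying 8 bytes of protection metadata each. Equivalent direct invocations, reusing the SPDK_ROOT shorthand from the earlier sketch:

  # three-source xor variant, then the DIF verify workload (no -y; val=No below)
  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y -x 3
  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dif_verify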
00:06:34.521 [2024-04-26 23:09:23.632786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738930 ] 00:06:34.521 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.521 [2024-04-26 23:09:23.698561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.521 [2024-04-26 23:09:23.734960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.521 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.521 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.521 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.521 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.521 23:09:23 -- accel/accel.sh@20 -- # val=0x1 00:06:34.521 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.521 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.521 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.521 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.521 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.521 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.522 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.522 23:09:23 -- accel/accel.sh@20 -- # val=dif_verify 00:06:34.522 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.522 23:09:23 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:34.522 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.522 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.522 23:09:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.522 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.522 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.522 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.522 23:09:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val=software 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@22 -- # accel_module=software 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r 
var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val=32 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val=32 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val=1 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val=No 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:34.782 23:09:23 -- accel/accel.sh@20 -- # val= 00:06:34.782 23:09:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # IFS=: 00:06:34.782 23:09:23 -- accel/accel.sh@19 -- # read -r var val 00:06:35.728 23:09:24 -- accel/accel.sh@20 -- # val= 00:06:35.728 23:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # IFS=: 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # read -r var val 00:06:35.728 23:09:24 -- accel/accel.sh@20 -- # val= 00:06:35.728 23:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # IFS=: 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # read -r var val 00:06:35.728 23:09:24 -- accel/accel.sh@20 -- # val= 00:06:35.728 23:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # IFS=: 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # read -r var val 00:06:35.728 23:09:24 -- accel/accel.sh@20 -- # val= 00:06:35.728 23:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # IFS=: 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # read -r var val 00:06:35.728 23:09:24 -- accel/accel.sh@20 -- # val= 00:06:35.728 23:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # IFS=: 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # read -r var val 00:06:35.728 23:09:24 -- accel/accel.sh@20 -- # val= 00:06:35.728 23:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # IFS=: 00:06:35.728 23:09:24 -- accel/accel.sh@19 -- # read -r var val 00:06:35.728 23:09:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.728 23:09:24 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:35.728 23:09:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.728 00:06:35.728 real 0m1.249s 00:06:35.728 user 0m1.152s 00:06:35.728 sys 0m0.110s 00:06:35.728 23:09:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.728 23:09:24 -- common/autotest_common.sh@10 -- # set +x 00:06:35.728 
************************************ 00:06:35.728 END TEST accel_dif_verify 00:06:35.728 ************************************ 00:06:35.728 23:09:24 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:35.728 23:09:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:35.728 23:09:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.728 23:09:24 -- common/autotest_common.sh@10 -- # set +x 00:06:35.989 ************************************ 00:06:35.989 START TEST accel_dif_generate 00:06:35.989 ************************************ 00:06:35.989 23:09:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:35.989 23:09:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.989 23:09:25 -- accel/accel.sh@17 -- # local accel_module 00:06:35.989 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.989 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.989 23:09:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:35.989 23:09:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:35.989 23:09:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.989 23:09:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.989 23:09:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.989 23:09:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.989 23:09:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.989 23:09:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.989 23:09:25 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.989 23:09:25 -- accel/accel.sh@41 -- # jq -r . 00:06:35.989 [2024-04-26 23:09:25.069946] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
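accel_dif_generate exercises the opposite direction of dif_verify: rather than checking existing protection fields, it generates the 8-byte DIF metadata for each 512-byte block of the 4096-byte buffer, as the same '4096 bytes' / '512 bytes' / '8 bytes' val lines below show. A one-line sketch under the same assumptions:

  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dif_generate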
00:06:35.990 [2024-04-26 23:09:25.070038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739250 ] 00:06:35.990 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.990 [2024-04-26 23:09:25.140578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.990 [2024-04-26 23:09:25.176672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val= 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val= 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val=0x1 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val= 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val= 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val=dif_generate 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val= 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val=software 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@22 -- # accel_module=software 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read 
-r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val=32 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val=32 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val=1 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val=No 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val= 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:35.990 23:09:25 -- accel/accel.sh@20 -- # val= 00:06:35.990 23:09:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # IFS=: 00:06:35.990 23:09:25 -- accel/accel.sh@19 -- # read -r var val 00:06:37.374 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.374 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.374 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.374 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.374 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.374 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.374 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.374 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.374 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.374 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.374 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.374 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.374 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.374 23:09:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.374 23:09:26 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:37.374 23:09:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.374 00:06:37.374 real 0m1.254s 00:06:37.374 user 0m1.144s 00:06:37.374 sys 0m0.122s 00:06:37.375 23:09:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.375 23:09:26 -- common/autotest_common.sh@10 -- # set +x 00:06:37.375 
************************************ 00:06:37.375 END TEST accel_dif_generate 00:06:37.375 ************************************ 00:06:37.375 23:09:26 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:37.375 23:09:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:37.375 23:09:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.375 23:09:26 -- common/autotest_common.sh@10 -- # set +x 00:06:37.375 ************************************ 00:06:37.375 START TEST accel_dif_generate_copy 00:06:37.375 ************************************ 00:06:37.375 23:09:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:37.375 23:09:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.375 23:09:26 -- accel/accel.sh@17 -- # local accel_module 00:06:37.375 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.375 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.375 23:09:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:37.375 23:09:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:37.375 23:09:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.375 23:09:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.375 23:09:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.375 23:09:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.375 23:09:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.375 23:09:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.375 23:09:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.375 23:09:26 -- accel/accel.sh@41 -- # jq -r . 00:06:37.375 [2024-04-26 23:09:26.504437] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
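accel_dif_generate_copy combines the generate step with a copy into a separate destination buffer, which is why two '4096 bytes' val lines (source and destination) appear below. Sketch, same assumptions as before:

  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dif_generate_copy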
00:06:37.375 [2024-04-26 23:09:26.504503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739603 ] 00:06:37.375 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.375 [2024-04-26 23:09:26.569300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.375 [2024-04-26 23:09:26.605242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val=0x1 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val=software 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val=32 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.635 23:09:26 -- accel/accel.sh@20 -- # val=32 00:06:37.635 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.635 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # read -r 
var val 00:06:37.636 23:09:26 -- accel/accel.sh@20 -- # val=1 00:06:37.636 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 23:09:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.636 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 23:09:26 -- accel/accel.sh@20 -- # val=No 00:06:37.636 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.636 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:37.636 23:09:26 -- accel/accel.sh@20 -- # val= 00:06:37.636 23:09:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # IFS=: 00:06:37.636 23:09:26 -- accel/accel.sh@19 -- # read -r var val 00:06:38.576 23:09:27 -- accel/accel.sh@20 -- # val= 00:06:38.576 23:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # IFS=: 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # read -r var val 00:06:38.576 23:09:27 -- accel/accel.sh@20 -- # val= 00:06:38.576 23:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # IFS=: 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # read -r var val 00:06:38.576 23:09:27 -- accel/accel.sh@20 -- # val= 00:06:38.576 23:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # IFS=: 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # read -r var val 00:06:38.576 23:09:27 -- accel/accel.sh@20 -- # val= 00:06:38.576 23:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # IFS=: 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # read -r var val 00:06:38.576 23:09:27 -- accel/accel.sh@20 -- # val= 00:06:38.576 23:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # IFS=: 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # read -r var val 00:06:38.576 23:09:27 -- accel/accel.sh@20 -- # val= 00:06:38.576 23:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # IFS=: 00:06:38.576 23:09:27 -- accel/accel.sh@19 -- # read -r var val 00:06:38.576 23:09:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.576 23:09:27 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:38.576 23:09:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.576 00:06:38.576 real 0m1.248s 00:06:38.576 user 0m1.143s 00:06:38.576 sys 0m0.114s 00:06:38.576 23:09:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.576 23:09:27 -- common/autotest_common.sh@10 -- # set +x 00:06:38.576 ************************************ 00:06:38.576 END TEST accel_dif_generate_copy 00:06:38.576 ************************************ 00:06:38.576 23:09:27 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:38.576 23:09:27 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.576 23:09:27 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:38.576 23:09:27 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.576 23:09:27 -- common/autotest_common.sh@10 -- # set +x 00:06:38.837 ************************************ 00:06:38.837 START TEST accel_comp 00:06:38.837 ************************************ 00:06:38.837 23:09:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.837 23:09:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.837 23:09:27 -- accel/accel.sh@17 -- # local accel_module 00:06:38.837 23:09:27 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:27 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.837 23:09:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.837 23:09:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.837 23:09:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.837 23:09:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.837 23:09:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.837 23:09:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.837 23:09:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.837 23:09:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.837 23:09:27 -- accel/accel.sh@41 -- # jq -r . 00:06:38.837 [2024-04-26 23:09:27.922648] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:38.837 [2024-04-26 23:09:27.922715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739965 ] 00:06:38.837 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.837 [2024-04-26 23:09:27.985337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.837 [2024-04-26 23:09:28.014773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val= 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val= 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val= 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val=0x1 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val= 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val= 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 
-- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val=compress 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val= 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val=software 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val=32 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val=32 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val=1 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val=No 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val= 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:38.837 23:09:28 -- accel/accel.sh@20 -- # val= 00:06:38.837 23:09:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # IFS=: 00:06:38.837 23:09:28 -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.222 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.222 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # read 
-r var val 00:06:40.222 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.222 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.222 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.222 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.222 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 23:09:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.222 23:09:29 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:40.222 23:09:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.222 00:06:40.222 real 0m1.239s 00:06:40.222 user 0m1.143s 00:06:40.222 sys 0m0.107s 00:06:40.222 23:09:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.222 23:09:29 -- common/autotest_common.sh@10 -- # set +x 00:06:40.222 ************************************ 00:06:40.222 END TEST accel_comp 00:06:40.222 ************************************ 00:06:40.222 23:09:29 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.222 23:09:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:40.222 23:09:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.222 23:09:29 -- common/autotest_common.sh@10 -- # set +x 00:06:40.222 ************************************ 00:06:40.222 START TEST accel_decomp 00:06:40.222 ************************************ 00:06:40.222 23:09:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.222 23:09:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.222 23:09:29 -- accel/accel.sh@17 -- # local accel_module 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.222 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.222 23:09:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.222 23:09:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.222 23:09:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.222 23:09:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.222 23:09:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.222 23:09:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.222 23:09:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.222 23:09:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.222 23:09:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.222 23:09:29 -- accel/accel.sh@41 -- # jq -r . 00:06:40.222 [2024-04-26 23:09:29.343973] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
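accel_comp and accel_decomp switch to the compression opcodes; both point accel_perf at an input file shipped in the repo via -l, and the decompress pass starting here re-enables result verification with -y. Sketches of the two invocations, same assumptions:

  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_ROOT/test/accel/bib"
  "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_ROOT/test/accel/bib" -y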
00:06:40.222 [2024-04-26 23:09:29.344039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740322 ] 00:06:40.222 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.222 [2024-04-26 23:09:29.409811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.222 [2024-04-26 23:09:29.446072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val=0x1 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val=decompress 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val=software 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@22 -- # accel_module=software 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val=32 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 
-- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val=32 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val=1 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val=Yes 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 23:09:29 -- accel/accel.sh@20 -- # val= 00:06:40.483 23:09:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 23:09:29 -- accel/accel.sh@19 -- # read -r var val 00:06:41.423 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.423 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.423 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.423 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.423 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.423 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.423 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.423 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.423 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.423 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.423 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.423 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.423 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.423 23:09:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.423 23:09:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.423 23:09:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.423 00:06:41.423 real 0m1.251s 00:06:41.423 user 0m1.155s 00:06:41.423 sys 0m0.108s 00:06:41.423 23:09:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.423 23:09:30 -- common/autotest_common.sh@10 -- # set +x 00:06:41.423 ************************************ 00:06:41.423 END TEST accel_decomp 00:06:41.423 ************************************ 00:06:41.423 23:09:30 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:41.423 23:09:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:41.423 23:09:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.423 23:09:30 -- common/autotest_common.sh@10 -- # set +x 00:06:41.683 ************************************ 00:06:41.683 START TEST accel_decmop_full 00:06:41.683 ************************************ 00:06:41.683 23:09:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:41.683 23:09:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.683 23:09:30 -- accel/accel.sh@17 -- # local accel_module 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:41.683 23:09:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:41.683 23:09:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.683 23:09:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.683 23:09:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.683 23:09:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.683 23:09:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.683 23:09:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.683 23:09:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:41.683 23:09:30 -- accel/accel.sh@41 -- # jq -r . 00:06:41.683 [2024-04-26 23:09:30.776436] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
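Each of these accel_* tests follows the same pattern: run_test calls accel_test, accel_test replays its option parsing (the long val=.../case "$var" sequences above), and finally runs the accel_perf example with the accel JSON config delivered on fd 62. A minimal sketch of the equivalent direct call, assuming a built SPDK tree at the workspace path recorded in this log and that an empty JSON object is an acceptable stand-in for the config the harness builds (both assumptions, not taken from this log):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1: run for 1 second; -w decompress: workload; -l: compressed input file;
  # -y: verify output; the accel config JSON is read via -c /dev/fd/62
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 62< <(echo '{}')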
00:06:41.683 [2024-04-26 23:09:30.776507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740563 ] 00:06:41.683 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.683 [2024-04-26 23:09:30.842082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.683 [2024-04-26 23:09:30.878936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val=0x1 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val=decompress 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val=software 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@22 -- # accel_module=software 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val=32 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 
23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val=32 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val=1 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val=Yes 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:41.683 23:09:30 -- accel/accel.sh@20 -- # val= 00:06:41.683 23:09:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # IFS=: 00:06:41.683 23:09:30 -- accel/accel.sh@19 -- # read -r var val 00:06:43.063 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.063 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.063 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.063 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.063 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.063 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.063 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.063 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.063 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.063 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.063 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.063 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.063 23:09:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.063 23:09:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.063 23:09:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.063 00:06:43.063 real 0m1.261s 00:06:43.063 user 0m1.164s 00:06:43.063 sys 0m0.109s 00:06:43.063 23:09:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.063 23:09:32 -- common/autotest_common.sh@10 -- # set +x 00:06:43.063 ************************************ 00:06:43.063 END TEST accel_decmop_full 00:06:43.063 ************************************ 00:06:43.063 23:09:32 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:43.063 23:09:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:43.063 23:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.063 23:09:32 -- common/autotest_common.sh@10 -- # set +x 00:06:43.063 ************************************ 00:06:43.063 START TEST accel_decomp_mcore 00:06:43.063 ************************************ 00:06:43.063 23:09:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:43.063 23:09:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.063 23:09:32 -- accel/accel.sh@17 -- # local accel_module 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.063 23:09:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:43.063 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.063 23:09:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:43.063 23:09:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.063 23:09:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.063 23:09:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.063 23:09:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.063 23:09:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.063 23:09:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.063 23:09:32 -- accel/accel.sh@40 -- # local IFS=, 00:06:43.063 23:09:32 -- accel/accel.sh@41 -- # jq -r . 00:06:43.063 [2024-04-26 23:09:32.215587] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
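The -m 0xf here is a hexadecimal reactor core mask: bits 0-3 are set, which is why the EAL notice below reports four available cores and four "Reactor started on core N" lines follow. Purely as an illustration of how such a mask decodes:

  mask=0xf
  # print one line per set bit, i.e. per reactor core
  for i in {0..31}; do (( (mask >> i) & 1 )) && echo "reactor on core $i"; done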
00:06:43.063 [2024-04-26 23:09:32.215669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740778 ] 00:06:43.063 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.063 [2024-04-26 23:09:32.282531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.323 [2024-04-26 23:09:32.323295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.323 [2024-04-26 23:09:32.323423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.323 [2024-04-26 23:09:32.323569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.323 [2024-04-26 23:09:32.323569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val=0xf 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val=decompress 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val=software 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@22 -- # accel_module=software 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val=32 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val=32 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val=1 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val=Yes 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.323 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.323 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.323 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:43.324 23:09:32 -- accel/accel.sh@20 -- # val= 00:06:43.324 23:09:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.324 23:09:32 -- accel/accel.sh@19 -- # IFS=: 00:06:43.324 23:09:32 -- accel/accel.sh@19 -- # read -r var val 00:06:44.261 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.261 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.261 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.261 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.261 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.261 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.262 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.262 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.262 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.262 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.262 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.262 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.262 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.262 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.262 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.262 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.262 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.262 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.262 
23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.262 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.262 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.262 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.262 23:09:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.262 23:09:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.262 23:09:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.262 00:06:44.262 real 0m1.261s 00:06:44.262 user 0m4.400s 00:06:44.262 sys 0m0.114s 00:06:44.262 23:09:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:44.262 23:09:33 -- common/autotest_common.sh@10 -- # set +x 00:06:44.262 ************************************ 00:06:44.262 END TEST accel_decomp_mcore 00:06:44.262 ************************************ 00:06:44.262 23:09:33 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.262 23:09:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:44.262 23:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.262 23:09:33 -- common/autotest_common.sh@10 -- # set +x 00:06:44.522 ************************************ 00:06:44.522 START TEST accel_decomp_full_mcore 00:06:44.522 ************************************ 00:06:44.522 23:09:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.522 23:09:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.522 23:09:33 -- accel/accel.sh@17 -- # local accel_module 00:06:44.522 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.522 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.522 23:09:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.522 23:09:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:44.522 23:09:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.522 23:09:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.522 23:09:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.522 23:09:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.522 23:09:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.522 23:09:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.522 23:09:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:44.522 23:09:33 -- accel/accel.sh@41 -- # jq -r . 00:06:44.522 [2024-04-26 23:09:33.675000] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:44.522 [2024-04-26 23:09:33.675074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741087 ] 00:06:44.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.522 [2024-04-26 23:09:33.743727] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.782 [2024-04-26 23:09:33.783642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.782 [2024-04-26 23:09:33.783786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.782 [2024-04-26 23:09:33.783933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.782 [2024-04-26 23:09:33.783934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val=0xf 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val=decompress 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val=software 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@22 -- # accel_module=software 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case 
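Note the payload size recorded in the config replay: the plain variants show val='4096 bytes', while the "full" variants, which pass -o 0, show val='111250 bytes', i.e. the whole bib input is decompressed as one buffer rather than in 4096-byte blocks. From the commands recorded in this log:

  ... -w decompress -l "$SPDK/test/accel/bib" -y          # 4096-byte blocks
  ... -w decompress -l "$SPDK/test/accel/bib" -y -o 0     # one 111250-byte buffer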
"$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val=32 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val=32 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val=1 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val=Yes 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:44.782 23:09:33 -- accel/accel.sh@20 -- # val= 00:06:44.782 23:09:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # IFS=: 00:06:44.782 23:09:33 -- accel/accel.sh@19 -- # read -r var val 00:06:45.722 23:09:34 -- accel/accel.sh@20 -- # val= 00:06:45.722 23:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # IFS=: 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # read -r var val 00:06:45.722 23:09:34 -- accel/accel.sh@20 -- # val= 00:06:45.722 23:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # IFS=: 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # read -r var val 00:06:45.722 23:09:34 -- accel/accel.sh@20 -- # val= 00:06:45.722 23:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # IFS=: 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # read -r var val 00:06:45.722 23:09:34 -- accel/accel.sh@20 -- # val= 00:06:45.722 23:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # IFS=: 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # read -r var val 00:06:45.722 23:09:34 -- accel/accel.sh@20 -- # val= 00:06:45.722 23:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # IFS=: 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # read -r var val 00:06:45.722 23:09:34 -- accel/accel.sh@20 -- # val= 00:06:45.722 23:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # IFS=: 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # read -r var val 00:06:45.722 23:09:34 -- accel/accel.sh@20 -- # val= 00:06:45.722 23:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # IFS=: 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # read -r var val 00:06:45.722 23:09:34 -- accel/accel.sh@20 -- # val= 00:06:45.722 23:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.722 
23:09:34 -- accel/accel.sh@19 -- # IFS=: 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # read -r var val 00:06:45.722 23:09:34 -- accel/accel.sh@20 -- # val= 00:06:45.722 23:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # IFS=: 00:06:45.722 23:09:34 -- accel/accel.sh@19 -- # read -r var val 00:06:45.722 23:09:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.722 23:09:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.722 23:09:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.722 00:06:45.722 real 0m1.278s 00:06:45.722 user 0m4.438s 00:06:45.722 sys 0m0.125s 00:06:45.722 23:09:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:45.722 23:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:45.722 ************************************ 00:06:45.722 END TEST accel_decomp_full_mcore 00:06:45.722 ************************************ 00:06:45.722 23:09:34 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.722 23:09:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:45.722 23:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.722 23:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:45.982 ************************************ 00:06:45.983 START TEST accel_decomp_mthread 00:06:45.983 ************************************ 00:06:45.983 23:09:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.983 23:09:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.983 23:09:35 -- accel/accel.sh@17 -- # local accel_module 00:06:45.983 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:45.983 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:45.983 23:09:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.983 23:09:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.983 23:09:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.983 23:09:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.983 23:09:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.983 23:09:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.983 23:09:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.983 23:09:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.983 23:09:35 -- accel/accel.sh@40 -- # local IFS=, 00:06:45.983 23:09:35 -- accel/accel.sh@41 -- # jq -r . 00:06:45.983 [2024-04-26 23:09:35.146679] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
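The mthread variants add -T 2, and the replay below records val=2: accel_perf runs two worker threads on the single core the harness selects (the EAL parameters show -c 0x1). A sketch of the recorded invocation, under the same fd-62 config assumption as above:

  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -T 2 62< <(echo '{}')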
00:06:45.983 [2024-04-26 23:09:35.146754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741449 ] 00:06:45.983 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.983 [2024-04-26 23:09:35.213619] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.243 [2024-04-26 23:09:35.250173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val= 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val= 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val= 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val=0x1 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val= 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val= 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val=decompress 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val= 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val=software 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@22 -- # accel_module=software 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val=32 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 
-- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val=32 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val=2 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val=Yes 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val= 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 23:09:35 -- accel/accel.sh@20 -- # val= 00:06:46.243 23:09:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 23:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:47.243 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.243 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.243 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.243 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.243 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.243 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.243 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.243 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.243 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.243 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.243 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.243 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.243 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.243 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.243 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.243 23:09:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.243 23:09:36 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.243 23:09:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.243 00:06:47.243 real 0m1.257s 00:06:47.243 user 0m1.163s 00:06:47.243 sys 0m0.107s 00:06:47.243 23:09:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.243 23:09:36 -- common/autotest_common.sh@10 -- # set +x 
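The real/user/sys triple that closes each test appears to come from run_test timing the test body with bash's time builtin; each one-second (-t 1) decompress pass here costs roughly 1.25 s of wall clock, nearly all of it user time, consistent with a software (CPU) decompress path. Illustratively:

  time accel_test -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2
  # -> real 0m1.257s  user 0m1.163s  sys 0m0.107s  (as recorded above)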
00:06:47.243 ************************************ 00:06:47.243 END TEST accel_decomp_mthread 00:06:47.243 ************************************ 00:06:47.243 23:09:36 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:47.243 23:09:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:47.243 23:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.243 23:09:36 -- common/autotest_common.sh@10 -- # set +x 00:06:47.504 ************************************ 00:06:47.504 START TEST accel_deomp_full_mthread 00:06:47.504 ************************************ 00:06:47.504 23:09:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:47.504 23:09:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.504 23:09:36 -- accel/accel.sh@17 -- # local accel_module 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:47.504 23:09:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:47.504 23:09:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.504 23:09:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.504 23:09:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.504 23:09:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.504 23:09:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.504 23:09:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.504 23:09:36 -- accel/accel.sh@40 -- # local IFS=, 00:06:47.504 23:09:36 -- accel/accel.sh@41 -- # jq -r . 00:06:47.504 [2024-04-26 23:09:36.601864] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
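This last perf variant combines the two options exercised separately above, -o 0 (full 111250-byte buffer) and -T 2 (two threads), and is accordingly the slowest software-path run in this block (real 0m1.283s below). As a sketch, under the same assumptions as the earlier invocations:

  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -T 2 62< <(echo '{}')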
00:06:47.504 [2024-04-26 23:09:36.601949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741807 ] 00:06:47.504 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.504 [2024-04-26 23:09:36.669519] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.504 [2024-04-26 23:09:36.705848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val=0x1 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val=decompress 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val=software 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@22 -- # accel_module=software 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.504 23:09:36 -- accel/accel.sh@20 -- # val=32 00:06:47.504 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.504 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.505 
23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.505 23:09:36 -- accel/accel.sh@20 -- # val=32 00:06:47.505 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.505 23:09:36 -- accel/accel.sh@20 -- # val=2 00:06:47.505 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.505 23:09:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.505 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.505 23:09:36 -- accel/accel.sh@20 -- # val=Yes 00:06:47.505 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.505 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.505 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:47.505 23:09:36 -- accel/accel.sh@20 -- # val= 00:06:47.505 23:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:47.505 23:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:48.888 23:09:37 -- accel/accel.sh@20 -- # val= 00:06:48.888 23:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:48.888 23:09:37 -- accel/accel.sh@20 -- # val= 00:06:48.888 23:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:48.888 23:09:37 -- accel/accel.sh@20 -- # val= 00:06:48.888 23:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:48.888 23:09:37 -- accel/accel.sh@20 -- # val= 00:06:48.888 23:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:48.888 23:09:37 -- accel/accel.sh@20 -- # val= 00:06:48.888 23:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:48.888 23:09:37 -- accel/accel.sh@20 -- # val= 00:06:48.888 23:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:48.888 23:09:37 -- accel/accel.sh@20 -- # val= 00:06:48.888 23:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:48.888 23:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:48.888 23:09:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.888 23:09:37 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:48.888 23:09:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.888 00:06:48.888 real 0m1.283s 00:06:48.888 user 0m1.177s 00:06:48.888 sys 0m0.117s 00:06:48.888 23:09:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.888 23:09:37 -- common/autotest_common.sh@10 -- # 
set +x 00:06:48.888 ************************************ 00:06:48.888 END TEST accel_deomp_full_mthread 00:06:48.888 ************************************ 00:06:48.888 23:09:37 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:48.888 23:09:37 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:48.888 23:09:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:48.888 23:09:37 -- accel/accel.sh@137 -- # build_accel_config 00:06:48.888 23:09:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.888 23:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:48.888 23:09:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.888 23:09:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.888 23:09:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.888 23:09:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.888 23:09:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.888 23:09:37 -- accel/accel.sh@40 -- # local IFS=, 00:06:48.888 23:09:37 -- accel/accel.sh@41 -- # jq -r . 00:06:48.888 ************************************ 00:06:48.888 START TEST accel_dif_functional_tests 00:06:48.888 ************************************ 00:06:48.888 23:09:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:48.888 [2024-04-26 23:09:38.096251] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:48.888 [2024-04-26 23:09:38.096320] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3742173 ] 00:06:48.888 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.149 [2024-04-26 23:09:38.162129] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.149 [2024-04-26 23:09:38.200420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.149 [2024-04-26 23:09:38.200545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.149 [2024-04-26 23:09:38.200547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.149 00:06:49.149 00:06:49.149 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.149 http://cunit.sourceforge.net/ 00:06:49.149 00:06:49.149 00:06:49.149 Suite: accel_dif 00:06:49.149 Test: verify: DIF generated, GUARD check ...passed 00:06:49.149 Test: verify: DIF generated, APPTAG check ...passed 00:06:49.149 Test: verify: DIF generated, REFTAG check ...passed 00:06:49.149 Test: verify: DIF not generated, GUARD check ...[2024-04-26 23:09:38.251695] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:49.149 [2024-04-26 23:09:38.251735] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:49.149 passed 00:06:49.149 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 23:09:38.251767] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:49.149 [2024-04-26 23:09:38.251782] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:49.149 passed 00:06:49.149 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 23:09:38.251798] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:49.149 [2024-04-26 
23:09:38.251814] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:49.149 passed 00:06:49.149 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:49.149 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 23:09:38.251871] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:49.149 passed 00:06:49.149 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:49.149 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:49.149 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:49.149 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 23:09:38.251986] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:49.149 passed 00:06:49.149 Test: generate copy: DIF generated, GUARD check ...passed 00:06:49.149 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:49.149 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:49.149 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:49.149 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:49.149 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:49.149 Test: generate copy: iovecs-len validate ...[2024-04-26 23:09:38.252173] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:49.149 passed 00:06:49.149 Test: generate copy: buffer alignment validate ...passed 00:06:49.149 00:06:49.149 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.149 suites 1 1 n/a 0 0 00:06:49.149 tests 20 20 20 0 0 00:06:49.149 asserts 204 204 204 0 n/a 00:06:49.149 00:06:49.149 Elapsed time = 0.002 seconds 00:06:49.149 00:06:49.149 real 0m0.314s 00:06:49.149 user 0m0.390s 00:06:49.149 sys 0m0.133s 00:06:49.149 23:09:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.149 23:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:49.149 ************************************ 00:06:49.149 END TEST accel_dif_functional_tests 00:06:49.149 ************************************ 00:06:49.149 00:06:49.149 real 0m31.990s 00:06:49.149 user 0m33.630s 00:06:49.149 sys 0m5.564s 00:06:49.149 23:09:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.149 23:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:49.149 ************************************ 00:06:49.149 END TEST accel 00:06:49.149 ************************************ 00:06:49.424 23:09:38 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:49.424 23:09:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.424 23:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.424 23:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:49.424 ************************************ 00:06:49.424 START TEST accel_rpc 00:06:49.424 ************************************ 00:06:49.424 23:09:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:49.424 * Looking for test storage...
00:06:49.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:49.685 23:09:38 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:49.685 23:09:38 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3742242 00:06:49.685 23:09:38 -- accel/accel_rpc.sh@15 -- # waitforlisten 3742242 00:06:49.685 23:09:38 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:49.685 23:09:38 -- common/autotest_common.sh@817 -- # '[' -z 3742242 ']' 00:06:49.685 23:09:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.685 23:09:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:49.685 23:09:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.685 23:09:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:49.685 23:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:49.685 [2024-04-26 23:09:38.734128] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:49.685 [2024-04-26 23:09:38.734179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3742242 ] 00:06:49.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.685 [2024-04-26 23:09:38.795906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.685 [2024-04-26 23:09:38.828652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.258 23:09:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:50.258 23:09:39 -- common/autotest_common.sh@850 -- # return 0 00:06:50.258 23:09:39 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:50.258 23:09:39 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:50.258 23:09:39 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:50.258 23:09:39 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:50.258 23:09:39 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:50.258 23:09:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.258 23:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.258 23:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:50.518 ************************************ 00:06:50.518 START TEST accel_assign_opcode 00:06:50.518 ************************************ 00:06:50.518 23:09:39 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:50.518 23:09:39 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:50.518 23:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.518 23:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:50.518 [2024-04-26 23:09:39.630949] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:50.518 23:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.518 23:09:39 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:50.518 23:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.518 23:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:50.518 [2024-04-26 23:09:39.642973] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:06:50.518 23:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.518 23:09:39 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:50.518 23:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.518 23:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:50.780 23:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.780 23:09:39 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:50.780 23:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.780 23:09:39 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:50.780 23:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:50.780 23:09:39 -- accel/accel_rpc.sh@42 -- # grep software 00:06:50.780 23:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.780 software 00:06:50.780 00:06:50.780 real 0m0.196s 00:06:50.780 user 0m0.052s 00:06:50.780 sys 0m0.009s 00:06:50.780 23:09:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.780 23:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:50.780 ************************************ 00:06:50.780 END TEST accel_assign_opcode 00:06:50.780 ************************************ 00:06:50.780 23:09:39 -- accel/accel_rpc.sh@55 -- # killprocess 3742242 00:06:50.780 23:09:39 -- common/autotest_common.sh@936 -- # '[' -z 3742242 ']' 00:06:50.780 23:09:39 -- common/autotest_common.sh@940 -- # kill -0 3742242 00:06:50.780 23:09:39 -- common/autotest_common.sh@941 -- # uname 00:06:50.780 23:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.780 23:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3742242 00:06:50.780 23:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.780 23:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.780 23:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3742242' 00:06:50.780 killing process with pid 3742242 00:06:50.780 23:09:39 -- common/autotest_common.sh@955 -- # kill 3742242 00:06:50.780 23:09:39 -- common/autotest_common.sh@960 -- # wait 3742242 00:06:51.040 00:06:51.040 real 0m1.531s 00:06:51.040 user 0m1.660s 00:06:51.040 sys 0m0.431s 00:06:51.040 23:09:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.040 23:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:51.040 ************************************ 00:06:51.041 END TEST accel_rpc 00:06:51.041 ************************************ 00:06:51.041 23:09:40 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:51.041 23:09:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:51.041 23:09:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.041 23:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:51.301 ************************************ 00:06:51.301 START TEST app_cmdline 00:06:51.301 ************************************ 00:06:51.301 23:09:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:51.301 * Looking for test storage... 
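The accel_assign_opcode trace above reduces to a short RPC sequence against a paused target. A minimal sketch of that sequence, assuming the spdk_tgt and rpc.py paths from this workspace and with the test harness's PID bookkeeping simplified:

  # start the target paused so opcode assignments land before module init
  build/bin/spdk_tgt --wait-for-rpc &
  tgt_pid=$!
  # assign the copy opcode to the software module, then finish initialization
  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  # confirm the assignment stuck; this run prints "software"
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy
  kill $tgt_pid
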
00:06:51.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:51.301 23:09:40 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:51.301 23:09:40 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3742669 00:06:51.301 23:09:40 -- app/cmdline.sh@18 -- # waitforlisten 3742669 00:06:51.301 23:09:40 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:51.301 23:09:40 -- common/autotest_common.sh@817 -- # '[' -z 3742669 ']' 00:06:51.301 23:09:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.301 23:09:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:51.301 23:09:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.301 23:09:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:51.301 23:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:51.301 [2024-04-26 23:09:40.452861] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:51.301 [2024-04-26 23:09:40.452914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3742669 ] 00:06:51.301 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.301 [2024-04-26 23:09:40.516471] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.301 [2024-04-26 23:09:40.551093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.243 23:09:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:52.244 23:09:41 -- common/autotest_common.sh@850 -- # return 0 00:06:52.244 23:09:41 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:52.244 { 00:06:52.244 "version": "SPDK v24.05-pre git sha1 8571999d8", 00:06:52.244 "fields": { 00:06:52.244 "major": 24, 00:06:52.244 "minor": 5, 00:06:52.244 "patch": 0, 00:06:52.244 "suffix": "-pre", 00:06:52.244 "commit": "8571999d8" 00:06:52.244 } 00:06:52.244 } 00:06:52.244 23:09:41 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:52.244 23:09:41 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:52.244 23:09:41 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:52.244 23:09:41 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:52.244 23:09:41 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:52.244 23:09:41 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:52.244 23:09:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.244 23:09:41 -- app/cmdline.sh@26 -- # sort 00:06:52.244 23:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.244 23:09:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.244 23:09:41 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:52.244 23:09:41 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:52.244 23:09:41 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.244 23:09:41 -- common/autotest_common.sh@638 -- # local es=0 00:06:52.244 23:09:41 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.244 23:09:41 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.244 23:09:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:52.244 23:09:41 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.244 23:09:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:52.244 23:09:41 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.244 23:09:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:52.244 23:09:41 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.244 23:09:41 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:52.244 23:09:41 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.505 request: 00:06:52.505 { 00:06:52.505 "method": "env_dpdk_get_mem_stats", 00:06:52.505 "req_id": 1 00:06:52.505 } 00:06:52.505 Got JSON-RPC error response 00:06:52.505 response: 00:06:52.505 { 00:06:52.505 "code": -32601, 00:06:52.505 "message": "Method not found" 00:06:52.505 } 00:06:52.505 23:09:41 -- common/autotest_common.sh@641 -- # es=1 00:06:52.505 23:09:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:52.505 23:09:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:52.505 23:09:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:52.505 23:09:41 -- app/cmdline.sh@1 -- # killprocess 3742669 00:06:52.505 23:09:41 -- common/autotest_common.sh@936 -- # '[' -z 3742669 ']' 00:06:52.505 23:09:41 -- common/autotest_common.sh@940 -- # kill -0 3742669 00:06:52.505 23:09:41 -- common/autotest_common.sh@941 -- # uname 00:06:52.505 23:09:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.505 23:09:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3742669 00:06:52.505 23:09:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:52.505 23:09:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.505 23:09:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3742669' 00:06:52.505 killing process with pid 3742669 00:06:52.505 23:09:41 -- common/autotest_common.sh@955 -- # kill 3742669 00:06:52.505 23:09:41 -- common/autotest_common.sh@960 -- # wait 3742669 00:06:52.765 00:06:52.765 real 0m1.507s 00:06:52.765 user 0m1.800s 00:06:52.765 sys 0m0.383s 00:06:52.765 23:09:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.765 23:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.765 ************************************ 00:06:52.765 END TEST app_cmdline 00:06:52.765 ************************************ 00:06:52.765 23:09:41 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:52.765 23:09:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.765 23:09:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.765 23:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:52.765 ************************************ 00:06:52.765 START TEST version 00:06:52.765 
************************************ 00:06:52.765 23:09:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:53.026 * Looking for test storage... 00:06:53.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:53.026 23:09:42 -- app/version.sh@17 -- # get_header_version major 00:06:53.026 23:09:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.026 23:09:42 -- app/version.sh@14 -- # cut -f2 00:06:53.026 23:09:42 -- app/version.sh@14 -- # tr -d '"' 00:06:53.026 23:09:42 -- app/version.sh@17 -- # major=24 00:06:53.026 23:09:42 -- app/version.sh@18 -- # get_header_version minor 00:06:53.026 23:09:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.026 23:09:42 -- app/version.sh@14 -- # cut -f2 00:06:53.026 23:09:42 -- app/version.sh@14 -- # tr -d '"' 00:06:53.026 23:09:42 -- app/version.sh@18 -- # minor=5 00:06:53.026 23:09:42 -- app/version.sh@19 -- # get_header_version patch 00:06:53.026 23:09:42 -- app/version.sh@14 -- # cut -f2 00:06:53.026 23:09:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.026 23:09:42 -- app/version.sh@14 -- # tr -d '"' 00:06:53.026 23:09:42 -- app/version.sh@19 -- # patch=0 00:06:53.026 23:09:42 -- app/version.sh@20 -- # get_header_version suffix 00:06:53.027 23:09:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.027 23:09:42 -- app/version.sh@14 -- # cut -f2 00:06:53.027 23:09:42 -- app/version.sh@14 -- # tr -d '"' 00:06:53.027 23:09:42 -- app/version.sh@20 -- # suffix=-pre 00:06:53.027 23:09:42 -- app/version.sh@22 -- # version=24.5 00:06:53.027 23:09:42 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:53.027 23:09:42 -- app/version.sh@28 -- # version=24.5rc0 00:06:53.027 23:09:42 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:53.027 23:09:42 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:53.027 23:09:42 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:53.027 23:09:42 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:53.027 00:06:53.027 real 0m0.151s 00:06:53.027 user 0m0.082s 00:06:53.027 sys 0m0.101s 00:06:53.027 23:09:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:53.027 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:06:53.027 ************************************ 00:06:53.027 END TEST version 00:06:53.027 ************************************ 00:06:53.027 23:09:42 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:53.027 23:09:42 -- spdk/autotest.sh@194 -- # uname -s 00:06:53.027 23:09:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:53.027 23:09:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:53.027 23:09:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:53.027 23:09:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:53.027 23:09:42 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:53.027 23:09:42 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:53.027 23:09:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:53.027 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:06:53.027 23:09:42 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:53.027 23:09:42 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:53.027 23:09:42 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:53.027 23:09:42 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:53.027 23:09:42 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:53.027 23:09:42 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:53.027 23:09:42 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:53.027 23:09:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:53.027 23:09:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.027 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:06:53.288 ************************************ 00:06:53.288 START TEST nvmf_tcp 00:06:53.288 ************************************ 00:06:53.288 23:09:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:53.288 * Looking for test storage... 00:06:53.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:53.288 23:09:42 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:53.288 23:09:42 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:53.288 23:09:42 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.288 23:09:42 -- nvmf/common.sh@7 -- # uname -s 00:06:53.288 23:09:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.288 23:09:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.288 23:09:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.288 23:09:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.288 23:09:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.288 23:09:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.288 23:09:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.288 23:09:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.288 23:09:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.288 23:09:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.288 23:09:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:53.288 23:09:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:53.288 23:09:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.288 23:09:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.288 23:09:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.288 23:09:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.288 23:09:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.288 23:09:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.288 23:09:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.288 23:09:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.288 23:09:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.288 23:09:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.288 23:09:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.288 23:09:42 -- paths/export.sh@5 -- # export PATH 00:06:53.288 23:09:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.288 23:09:42 -- nvmf/common.sh@47 -- # : 0 00:06:53.288 23:09:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.288 23:09:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.288 23:09:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.288 23:09:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.288 23:09:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.288 23:09:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.288 23:09:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.288 23:09:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.288 23:09:42 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:53.288 23:09:42 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:53.288 23:09:42 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:53.288 23:09:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:53.288 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:06:53.288 23:09:42 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:53.288 23:09:42 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:53.288 23:09:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:53.288 23:09:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.288 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:06:53.549 ************************************ 00:06:53.549 START TEST nvmf_example 00:06:53.549 ************************************ 00:06:53.549 23:09:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:53.549 * Looking for test storage... 
00:06:53.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.549 23:09:42 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.549 23:09:42 -- nvmf/common.sh@7 -- # uname -s 00:06:53.549 23:09:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.549 23:09:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.549 23:09:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.549 23:09:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.549 23:09:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.549 23:09:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.549 23:09:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.549 23:09:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.549 23:09:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.549 23:09:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.549 23:09:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:53.549 23:09:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:53.549 23:09:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.549 23:09:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.549 23:09:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.549 23:09:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.549 23:09:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.549 23:09:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.810 23:09:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.810 23:09:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.810 23:09:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.810 23:09:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.810 23:09:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.810 23:09:42 -- paths/export.sh@5 -- # export PATH 00:06:53.810 23:09:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.810 23:09:42 -- nvmf/common.sh@47 -- # : 0 00:06:53.810 23:09:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.810 23:09:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.810 23:09:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.810 23:09:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.810 23:09:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.810 23:09:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.810 23:09:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.810 23:09:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.810 23:09:42 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:53.810 23:09:42 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:53.810 23:09:42 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:53.810 23:09:42 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:53.810 23:09:42 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:53.810 23:09:42 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:53.810 23:09:42 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:53.810 23:09:42 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:53.810 23:09:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:53.810 23:09:42 -- common/autotest_common.sh@10 -- # set +x 00:06:53.810 23:09:42 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:53.810 23:09:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:53.810 23:09:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.810 23:09:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:53.810 23:09:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:53.810 23:09:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:53.810 23:09:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.810 23:09:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.810 23:09:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.810 23:09:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:53.810 23:09:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:53.810 23:09:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:53.810 23:09:42 -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.397 23:09:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:00.397 23:09:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:00.397 23:09:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:00.397 23:09:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:00.397 23:09:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:00.397 23:09:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:00.397 23:09:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:00.397 23:09:49 -- nvmf/common.sh@295 -- # net_devs=() 00:07:00.397 23:09:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:00.397 23:09:49 -- nvmf/common.sh@296 -- # e810=() 00:07:00.397 23:09:49 -- nvmf/common.sh@296 -- # local -ga e810 00:07:00.397 23:09:49 -- nvmf/common.sh@297 -- # x722=() 00:07:00.397 23:09:49 -- nvmf/common.sh@297 -- # local -ga x722 00:07:00.397 23:09:49 -- nvmf/common.sh@298 -- # mlx=() 00:07:00.397 23:09:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:00.397 23:09:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.397 23:09:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:00.397 23:09:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:00.397 23:09:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:00.397 23:09:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.397 23:09:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:00.397 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:00.397 23:09:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:00.397 23:09:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:00.397 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:00.397 23:09:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
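The NIC discovery traced above walks the cached PCI device list and resolves each function to its kernel netdev through sysfs. A condensed sketch using the device IDs from this run and the same sysfs glob the script itself uses:

  # the two Intel E810 ports (0x8086 - 0x159b) found on this host
  for pci in 0000:31:00.0 0000:31:00.1; do
      # each PCI function exposes its netdev name under /sys/bus/pci/devices/<bdf>/net/
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
  done
  # this run reports cvl_0_0 and cvl_0_1
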
00:07:00.397 23:09:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:00.397 23:09:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:00.397 23:09:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.397 23:09:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.397 23:09:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:00.397 23:09:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.397 23:09:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:00.397 Found net devices under 0000:31:00.0: cvl_0_0 00:07:00.658 23:09:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.658 23:09:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:00.658 23:09:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.658 23:09:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:00.658 23:09:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.658 23:09:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:00.658 Found net devices under 0000:31:00.1: cvl_0_1 00:07:00.658 23:09:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.658 23:09:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:00.658 23:09:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:00.658 23:09:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:00.658 23:09:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:00.658 23:09:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:00.658 23:09:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.658 23:09:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.658 23:09:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.658 23:09:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:00.658 23:09:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.658 23:09:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.658 23:09:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:00.658 23:09:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.658 23:09:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.658 23:09:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:00.658 23:09:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:00.658 23:09:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.658 23:09:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.658 23:09:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.658 23:09:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.658 23:09:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:00.658 23:09:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.919 23:09:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.919 23:09:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.919 23:09:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:00.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:00.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:07:00.919 00:07:00.919 --- 10.0.0.2 ping statistics --- 00:07:00.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.919 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:07:00.919 23:09:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:07:00.919 00:07:00.919 --- 10.0.0.1 ping statistics --- 00:07:00.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.919 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:07:00.919 23:09:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.919 23:09:49 -- nvmf/common.sh@411 -- # return 0 00:07:00.919 23:09:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:00.919 23:09:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.919 23:09:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:00.919 23:09:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:00.919 23:09:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.919 23:09:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:00.919 23:09:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:00.919 23:09:50 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:00.919 23:09:50 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:00.919 23:09:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:00.919 23:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:00.919 23:09:50 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:00.919 23:09:50 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:00.919 23:09:50 -- target/nvmf_example.sh@34 -- # nvmfpid=3747126 00:07:00.919 23:09:50 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:00.919 23:09:50 -- target/nvmf_example.sh@36 -- # waitforlisten 3747126 00:07:00.919 23:09:50 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:00.919 23:09:50 -- common/autotest_common.sh@817 -- # '[' -z 3747126 ']' 00:07:00.919 23:09:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.919 23:09:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:00.919 23:09:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
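The interface plumbing traced above builds a two-endpoint NVMe/TCP topology on a single host: one port is moved into a private network namespace to act as the target, while the other stays in the default namespace as the initiator. A condensed sketch with the interface names and addresses from this run:

  # target side: isolate one port in its own namespace at 10.0.0.2
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # initiator side: the second port keeps 10.0.0.1 in the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # admit NVMe/TCP traffic on the listener port, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
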
00:07:00.919 23:09:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:00.919 23:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:00.919 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.864 23:09:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.864 23:09:50 -- common/autotest_common.sh@850 -- # return 0 00:07:01.864 23:09:50 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:01.864 23:09:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:01.864 23:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:01.864 23:09:50 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.864 23:09:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.864 23:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:01.864 23:09:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.864 23:09:50 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:01.864 23:09:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.864 23:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:01.864 23:09:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.864 23:09:50 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:01.864 23:09:50 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:01.864 23:09:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.864 23:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:01.864 23:09:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.864 23:09:50 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:01.864 23:09:50 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:01.864 23:09:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.864 23:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:01.864 23:09:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.864 23:09:50 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.864 23:09:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.864 23:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:01.864 23:09:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.864 23:09:50 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:01.864 23:09:50 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:01.864 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.107 Initializing NVMe Controllers 00:07:14.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:14.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:14.107 Initialization complete. Launching workers. 
00:07:14.107 ======================================================== 00:07:14.107 Latency(us) 00:07:14.107 Device Information : IOPS MiB/s Average min max 00:07:14.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17269.50 67.46 3707.20 864.60 19100.14 00:07:14.107 ======================================================== 00:07:14.107 Total : 17269.50 67.46 3707.20 864.60 19100.14 00:07:14.107 00:07:14.107 23:10:01 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:14.107 23:10:01 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:14.107 23:10:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:14.107 23:10:01 -- nvmf/common.sh@117 -- # sync 00:07:14.107 23:10:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.107 23:10:01 -- nvmf/common.sh@120 -- # set +e 00:07:14.107 23:10:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.107 23:10:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.107 rmmod nvme_tcp 00:07:14.107 rmmod nvme_fabrics 00:07:14.107 rmmod nvme_keyring 00:07:14.107 23:10:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.107 23:10:01 -- nvmf/common.sh@124 -- # set -e 00:07:14.107 23:10:01 -- nvmf/common.sh@125 -- # return 0 00:07:14.107 23:10:01 -- nvmf/common.sh@478 -- # '[' -n 3747126 ']' 00:07:14.107 23:10:01 -- nvmf/common.sh@479 -- # killprocess 3747126 00:07:14.107 23:10:01 -- common/autotest_common.sh@936 -- # '[' -z 3747126 ']' 00:07:14.107 23:10:01 -- common/autotest_common.sh@940 -- # kill -0 3747126 00:07:14.107 23:10:01 -- common/autotest_common.sh@941 -- # uname 00:07:14.107 23:10:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.107 23:10:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3747126 00:07:14.107 23:10:01 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:14.107 23:10:01 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:14.107 23:10:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3747126' 00:07:14.107 killing process with pid 3747126 00:07:14.107 23:10:01 -- common/autotest_common.sh@955 -- # kill 3747126 00:07:14.107 23:10:01 -- common/autotest_common.sh@960 -- # wait 3747126 00:07:14.107 nvmf threads initialize successfully 00:07:14.107 bdev subsystem init successfully 00:07:14.107 created a nvmf target service 00:07:14.107 create targets's poll groups done 00:07:14.107 all subsystems of target started 00:07:14.107 nvmf target is running 00:07:14.107 all subsystems of target stopped 00:07:14.107 destroy targets's poll groups done 00:07:14.107 destroyed the nvmf target service 00:07:14.107 bdev subsystem finish successfully 00:07:14.107 nvmf threads destroy successfully 00:07:14.107 23:10:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:14.107 23:10:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:14.107 23:10:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:14.107 23:10:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.107 23:10:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.107 23:10:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.107 23:10:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.107 23:10:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.368 23:10:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:14.368 23:10:03 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:14.368 23:10:03 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:07:14.368 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:14.368 00:07:14.368 real 0m20.844s 00:07:14.368 user 0m46.384s 00:07:14.368 sys 0m6.383s 00:07:14.368 23:10:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.368 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:14.368 ************************************ 00:07:14.368 END TEST nvmf_example 00:07:14.368 ************************************ 00:07:14.368 23:10:03 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:14.368 23:10:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:14.368 23:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.368 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 ************************************ 00:07:14.631 START TEST nvmf_filesystem 00:07:14.631 ************************************ 00:07:14.631 23:10:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:14.631 * Looking for test storage... 00:07:14.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.631 23:10:03 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:14.631 23:10:03 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:14.631 23:10:03 -- common/autotest_common.sh@34 -- # set -e 00:07:14.631 23:10:03 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:14.631 23:10:03 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:14.631 23:10:03 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:14.631 23:10:03 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:14.631 23:10:03 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:14.631 23:10:03 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:14.631 23:10:03 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:14.631 23:10:03 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:14.631 23:10:03 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:14.632 23:10:03 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:14.632 23:10:03 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:14.632 23:10:03 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:14.632 23:10:03 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:14.632 23:10:03 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:14.632 23:10:03 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:14.632 23:10:03 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:14.632 23:10:03 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:14.632 23:10:03 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:14.632 23:10:03 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:14.632 23:10:03 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:14.632 23:10:03 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:14.632 23:10:03 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:14.632 23:10:03 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:14.632 23:10:03 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:14.632 23:10:03 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:14.632 23:10:03 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:14.632 23:10:03 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:14.632 23:10:03 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:14.632 23:10:03 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:14.632 23:10:03 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:14.632 23:10:03 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:14.632 23:10:03 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:14.632 23:10:03 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:14.632 23:10:03 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:14.632 23:10:03 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:14.632 23:10:03 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:14.632 23:10:03 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:14.632 23:10:03 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:14.632 23:10:03 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:14.632 23:10:03 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:14.632 23:10:03 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:14.632 23:10:03 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:14.632 23:10:03 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:14.632 23:10:03 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:14.632 23:10:03 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:14.632 23:10:03 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:14.632 23:10:03 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:14.632 23:10:03 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:14.632 23:10:03 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:14.632 23:10:03 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:14.632 23:10:03 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:14.632 23:10:03 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:14.632 23:10:03 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:14.632 23:10:03 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:14.632 23:10:03 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:14.632 23:10:03 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:14.632 23:10:03 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:14.632 23:10:03 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:14.632 23:10:03 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:14.632 23:10:03 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:14.632 23:10:03 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:14.632 23:10:03 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:14.632 23:10:03 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:14.632 23:10:03 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:14.632 23:10:03 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:14.632 23:10:03 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:14.632 23:10:03 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:14.632 23:10:03 -- common/build_config.sh@63 
-- # CONFIG_RDMA_PROV=verbs 00:07:14.632 23:10:03 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:14.632 23:10:03 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:14.632 23:10:03 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:14.632 23:10:03 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:14.632 23:10:03 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:14.632 23:10:03 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:14.632 23:10:03 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:14.632 23:10:03 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:14.632 23:10:03 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:14.632 23:10:03 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:14.632 23:10:03 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:14.632 23:10:03 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:14.632 23:10:03 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:14.632 23:10:03 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:14.632 23:10:03 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:14.632 23:10:03 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:14.632 23:10:03 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:14.632 23:10:03 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:14.632 23:10:03 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:14.632 23:10:03 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:14.632 23:10:03 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:14.632 23:10:03 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:14.632 23:10:03 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:14.632 23:10:03 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:14.632 23:10:03 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:14.632 23:10:03 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:14.632 23:10:03 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:14.632 23:10:03 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:14.632 23:10:03 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:14.632 23:10:03 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:14.632 23:10:03 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:14.632 23:10:03 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:14.632 23:10:03 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:14.632 23:10:03 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:14.632 23:10:03 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:14.632 #define SPDK_CONFIG_H 00:07:14.632 #define SPDK_CONFIG_APPS 1 00:07:14.632 #define SPDK_CONFIG_ARCH native 00:07:14.632 #undef SPDK_CONFIG_ASAN 00:07:14.632 #undef SPDK_CONFIG_AVAHI 00:07:14.632 #undef SPDK_CONFIG_CET 00:07:14.632 #define SPDK_CONFIG_COVERAGE 1 00:07:14.632 #define 
SPDK_CONFIG_CROSS_PREFIX 00:07:14.632 #undef SPDK_CONFIG_CRYPTO 00:07:14.632 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:14.632 #undef SPDK_CONFIG_CUSTOMOCF 00:07:14.632 #undef SPDK_CONFIG_DAOS 00:07:14.632 #define SPDK_CONFIG_DAOS_DIR 00:07:14.632 #define SPDK_CONFIG_DEBUG 1 00:07:14.632 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:14.632 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:14.632 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:14.632 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:14.632 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:14.632 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:14.632 #define SPDK_CONFIG_EXAMPLES 1 00:07:14.632 #undef SPDK_CONFIG_FC 00:07:14.632 #define SPDK_CONFIG_FC_PATH 00:07:14.632 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:14.632 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:14.632 #undef SPDK_CONFIG_FUSE 00:07:14.632 #undef SPDK_CONFIG_FUZZER 00:07:14.632 #define SPDK_CONFIG_FUZZER_LIB 00:07:14.632 #undef SPDK_CONFIG_GOLANG 00:07:14.632 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:14.632 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:14.632 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:14.632 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:14.632 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:14.632 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:14.632 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:14.632 #define SPDK_CONFIG_IDXD 1 00:07:14.632 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:14.632 #undef SPDK_CONFIG_IPSEC_MB 00:07:14.632 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:14.632 #define SPDK_CONFIG_ISAL 1 00:07:14.632 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:14.632 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:14.632 #define SPDK_CONFIG_LIBDIR 00:07:14.632 #undef SPDK_CONFIG_LTO 00:07:14.632 #define SPDK_CONFIG_MAX_LCORES 00:07:14.632 #define SPDK_CONFIG_NVME_CUSE 1 00:07:14.632 #undef SPDK_CONFIG_OCF 00:07:14.632 #define SPDK_CONFIG_OCF_PATH 00:07:14.632 #define SPDK_CONFIG_OPENSSL_PATH 00:07:14.632 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:14.632 #define SPDK_CONFIG_PGO_DIR 00:07:14.632 #undef SPDK_CONFIG_PGO_USE 00:07:14.632 #define SPDK_CONFIG_PREFIX /usr/local 00:07:14.632 #undef SPDK_CONFIG_RAID5F 00:07:14.632 #undef SPDK_CONFIG_RBD 00:07:14.632 #define SPDK_CONFIG_RDMA 1 00:07:14.632 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:14.632 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:14.632 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:14.632 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:14.632 #define SPDK_CONFIG_SHARED 1 00:07:14.632 #undef SPDK_CONFIG_SMA 00:07:14.632 #define SPDK_CONFIG_TESTS 1 00:07:14.632 #undef SPDK_CONFIG_TSAN 00:07:14.632 #define SPDK_CONFIG_UBLK 1 00:07:14.632 #define SPDK_CONFIG_UBSAN 1 00:07:14.632 #undef SPDK_CONFIG_UNIT_TESTS 00:07:14.633 #undef SPDK_CONFIG_URING 00:07:14.633 #define SPDK_CONFIG_URING_PATH 00:07:14.633 #undef SPDK_CONFIG_URING_ZNS 00:07:14.633 #undef SPDK_CONFIG_USDT 00:07:14.633 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:14.633 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:14.633 #define SPDK_CONFIG_VFIO_USER 1 00:07:14.633 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:14.633 #define SPDK_CONFIG_VHOST 1 00:07:14.633 #define SPDK_CONFIG_VIRTIO 1 00:07:14.633 #undef SPDK_CONFIG_VTUNE 00:07:14.633 #define SPDK_CONFIG_VTUNE_DIR 00:07:14.633 #define SPDK_CONFIG_WERROR 1 00:07:14.633 #define SPDK_CONFIG_WPDK_DIR 00:07:14.633 #undef SPDK_CONFIG_XNVME 00:07:14.633 #endif /* 
SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:14.633 23:10:03 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:14.633 23:10:03 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.633 23:10:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.633 23:10:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.633 23:10:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.633 23:10:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.633 23:10:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.633 23:10:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.633 23:10:03 -- paths/export.sh@5 -- # export PATH 00:07:14.633 23:10:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.633 23:10:03 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:14.633 23:10:03 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:14.633 23:10:03 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:14.633 23:10:03 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:14.633 23:10:03 -- pm/common@7 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:14.633 23:10:03 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:14.633 23:10:03 -- pm/common@67 -- # TEST_TAG=N/A 00:07:14.633 23:10:03 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:14.633 23:10:03 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:14.633 23:10:03 -- pm/common@71 -- # uname -s 00:07:14.633 23:10:03 -- pm/common@71 -- # PM_OS=Linux 00:07:14.633 23:10:03 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:14.633 23:10:03 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:07:14.633 23:10:03 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:14.633 23:10:03 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:07:14.633 23:10:03 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:14.633 23:10:03 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:14.633 23:10:03 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:14.633 23:10:03 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:14.633 23:10:03 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:14.633 23:10:03 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:14.633 23:10:03 -- common/autotest_common.sh@57 -- # : 1 00:07:14.633 23:10:03 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:14.633 23:10:03 -- common/autotest_common.sh@61 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:14.633 23:10:03 -- common/autotest_common.sh@63 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:14.633 23:10:03 -- common/autotest_common.sh@65 -- # : 1 00:07:14.633 23:10:03 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:14.633 23:10:03 -- common/autotest_common.sh@67 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:14.633 23:10:03 -- common/autotest_common.sh@69 -- # : 00:07:14.633 23:10:03 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:14.633 23:10:03 -- common/autotest_common.sh@71 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:14.633 23:10:03 -- common/autotest_common.sh@73 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:14.633 23:10:03 -- common/autotest_common.sh@75 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:14.633 23:10:03 -- common/autotest_common.sh@77 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:14.633 23:10:03 -- common/autotest_common.sh@79 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:14.633 23:10:03 -- common/autotest_common.sh@81 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:14.633 23:10:03 -- common/autotest_common.sh@83 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:14.633 23:10:03 -- common/autotest_common.sh@85 -- # : 1 00:07:14.633 23:10:03 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:14.633 23:10:03 -- common/autotest_common.sh@87 -- # : 0 00:07:14.633 23:10:03 -- 
common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:14.633 23:10:03 -- common/autotest_common.sh@89 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:14.633 23:10:03 -- common/autotest_common.sh@91 -- # : 1 00:07:14.633 23:10:03 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:14.633 23:10:03 -- common/autotest_common.sh@93 -- # : 1 00:07:14.633 23:10:03 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:14.633 23:10:03 -- common/autotest_common.sh@95 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:14.633 23:10:03 -- common/autotest_common.sh@97 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:14.633 23:10:03 -- common/autotest_common.sh@99 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:14.633 23:10:03 -- common/autotest_common.sh@101 -- # : tcp 00:07:14.633 23:10:03 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:14.633 23:10:03 -- common/autotest_common.sh@103 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:14.633 23:10:03 -- common/autotest_common.sh@105 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:14.633 23:10:03 -- common/autotest_common.sh@107 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:14.633 23:10:03 -- common/autotest_common.sh@109 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:14.633 23:10:03 -- common/autotest_common.sh@111 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:14.633 23:10:03 -- common/autotest_common.sh@113 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:14.633 23:10:03 -- common/autotest_common.sh@115 -- # : 0 00:07:14.633 23:10:03 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:14.898 23:10:03 -- common/autotest_common.sh@117 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:14.898 23:10:03 -- common/autotest_common.sh@119 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:14.898 23:10:03 -- common/autotest_common.sh@121 -- # : 1 00:07:14.898 23:10:03 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:14.898 23:10:03 -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:14.898 23:10:03 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:14.898 23:10:03 -- common/autotest_common.sh@125 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:14.898 23:10:03 -- common/autotest_common.sh@127 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:14.898 23:10:03 -- common/autotest_common.sh@129 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:14.898 23:10:03 -- common/autotest_common.sh@131 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:14.898 23:10:03 -- common/autotest_common.sh@133 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:14.898 23:10:03 
-- common/autotest_common.sh@135 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:14.898 23:10:03 -- common/autotest_common.sh@137 -- # : v23.11 00:07:14.898 23:10:03 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:14.898 23:10:03 -- common/autotest_common.sh@139 -- # : true 00:07:14.898 23:10:03 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:14.898 23:10:03 -- common/autotest_common.sh@141 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:14.898 23:10:03 -- common/autotest_common.sh@143 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:14.898 23:10:03 -- common/autotest_common.sh@145 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:14.898 23:10:03 -- common/autotest_common.sh@147 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:14.898 23:10:03 -- common/autotest_common.sh@149 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:14.898 23:10:03 -- common/autotest_common.sh@151 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:14.898 23:10:03 -- common/autotest_common.sh@153 -- # : e810 00:07:14.898 23:10:03 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:14.898 23:10:03 -- common/autotest_common.sh@155 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:14.898 23:10:03 -- common/autotest_common.sh@157 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:14.898 23:10:03 -- common/autotest_common.sh@159 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:14.898 23:10:03 -- common/autotest_common.sh@161 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:14.898 23:10:03 -- common/autotest_common.sh@163 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:14.898 23:10:03 -- common/autotest_common.sh@166 -- # : 00:07:14.898 23:10:03 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:14.898 23:10:03 -- common/autotest_common.sh@168 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:14.898 23:10:03 -- common/autotest_common.sh@170 -- # : 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:14.898 23:10:03 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:14.898 23:10:03 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:14.898 23:10:03 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:14.898 23:10:03 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:14.898 23:10:03 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.898 23:10:03 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
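The autotest_common.sh@174-177 records around this point export the library search paths and fold them into LD_LIBRARY_PATH. A minimal sketch of what those four traced lines appear to do; $rootdir and the derivation of DPDK_LIB_DIR from $SPDK_RUN_EXTERNAL_DPDK are assumptions read off the traced values, not quoted source:

    # Sketch only; everything except the exported names and values is inferred.
    # autotest_common.sh is re-sourced by each nested test script, and every
    # pass appends the same triple again, which is why the LD_LIBRARY_PATH
    # value traced below repeats it several times.
    export SPDK_LIB_DIR="$rootdir/build/lib"
    export DPDK_LIB_DIR="$SPDK_RUN_EXTERNAL_DPDK/lib"    # .../dpdk/build/lib in this run
    export VFIO_LIB_DIR="$rootdir/build/libvfio-user/usr/local/lib"
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR"
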
00:07:14.898 23:10:03 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.898 23:10:03 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.898 23:10:03 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:14.898 23:10:03 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:14.898 23:10:03 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:14.898 23:10:03 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:14.898 23:10:03 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:14.898 23:10:03 -- common/autotest_common.sh@188 -- # 
PYTHONDONTWRITEBYTECODE=1 00:07:14.898 23:10:03 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:14.898 23:10:03 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:14.898 23:10:03 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:14.898 23:10:03 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:14.898 23:10:03 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:14.898 23:10:03 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:14.898 23:10:03 -- common/autotest_common.sh@199 -- # cat 00:07:14.898 23:10:03 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:14.898 23:10:03 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:14.898 23:10:03 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:14.898 23:10:03 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:14.898 23:10:03 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:14.898 23:10:03 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:14.898 23:10:03 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:14.898 23:10:03 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:14.898 23:10:03 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:14.898 23:10:03 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:14.898 23:10:03 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:14.898 23:10:03 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:14.898 23:10:03 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:14.898 23:10:03 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:14.898 23:10:03 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:14.898 23:10:03 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:14.898 23:10:03 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:14.898 23:10:03 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:14.898 23:10:03 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:14.898 23:10:03 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:14.898 23:10:03 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:14.898 23:10:03 -- common/autotest_common.sh@252 -- # valgrind= 00:07:14.898 23:10:03 -- common/autotest_common.sh@258 -- # uname -s 00:07:14.898 23:10:03 -- common/autotest_common.sh@258 -- # '[' 
Linux = Linux ']' 00:07:14.898 23:10:03 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:14.898 23:10:03 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:14.898 23:10:03 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:14.898 23:10:03 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:14.898 23:10:03 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:14.898 23:10:03 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:14.898 23:10:03 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j144 00:07:14.898 23:10:03 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:14.898 23:10:03 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:14.898 23:10:03 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:14.898 23:10:03 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:14.898 23:10:03 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:14.898 23:10:03 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:14.898 23:10:03 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:14.898 23:10:03 -- common/autotest_common.sh@307 -- # [[ -z 3749984 ]] 00:07:14.898 23:10:03 -- common/autotest_common.sh@307 -- # kill -0 3749984 00:07:14.898 23:10:03 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:14.898 23:10:03 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:14.898 23:10:03 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:14.898 23:10:03 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:14.898 23:10:03 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:14.898 23:10:03 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:14.898 23:10:03 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:14.898 23:10:03 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:14.898 23:10:03 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.iWgFiU 00:07:14.898 23:10:03 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:14.898 23:10:03 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:14.898 23:10:03 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:14.898 23:10:03 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.iWgFiU/tests/target /tmp/spdk.iWgFiU 00:07:14.898 23:10:03 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:14.898 23:10:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.898 23:10:03 -- common/autotest_common.sh@316 -- # df -T 00:07:14.898 23:10:03 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:14.898 23:10:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:14.898 23:10:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:14.898 
23:10:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:14.898 23:10:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:07:14.898 23:10:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=120358506496 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=129371000832 00:07:14.898 23:10:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=9012494336 00:07:14.898 23:10:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=64682885120 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685498368 00:07:14.898 23:10:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:07:14.898 23:10:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=25864454144 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=25874202624 00:07:14.898 23:10:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=9748480 00:07:14.898 23:10:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=189440 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:07:14.898 23:10:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=314368 00:07:14.898 23:10:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=64684863488 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685502464 00:07:14.898 23:10:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=638976 00:07:14.898 23:10:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # avails["$mount"]=12937093120 00:07:14.898 23:10:03 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12937097216 00:07:14.898 23:10:03 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:14.898 23:10:03 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.898 23:10:03 -- common/autotest_common.sh@355 -- # 
printf '* Looking for test storage...\n' 00:07:14.898 * Looking for test storage... 00:07:14.898 23:10:03 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:14.898 23:10:03 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:14.898 23:10:03 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.898 23:10:03 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:14.898 23:10:03 -- common/autotest_common.sh@361 -- # mount=/ 00:07:14.898 23:10:03 -- common/autotest_common.sh@363 -- # target_space=120358506496 00:07:14.898 23:10:03 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:14.898 23:10:03 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:14.898 23:10:03 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:14.898 23:10:03 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:14.898 23:10:03 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:14.898 23:10:03 -- common/autotest_common.sh@370 -- # new_size=11227086848 00:07:14.898 23:10:03 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:14.898 23:10:03 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.898 23:10:03 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.898 23:10:03 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.898 23:10:03 -- common/autotest_common.sh@378 -- # return 0 00:07:14.898 23:10:03 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:14.898 23:10:03 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:14.898 23:10:03 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:14.898 23:10:03 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:14.899 23:10:03 -- common/autotest_common.sh@1673 -- # true 00:07:14.899 23:10:03 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:14.899 23:10:03 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:14.899 23:10:03 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:14.899 23:10:03 -- common/autotest_common.sh@27 -- # exec 00:07:14.899 23:10:03 -- common/autotest_common.sh@29 -- # exec 00:07:14.899 23:10:03 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:14.899 23:10:03 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:14.899 23:10:03 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:14.899 23:10:03 -- common/autotest_common.sh@18 -- # set -x 00:07:14.899 23:10:03 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.899 23:10:03 -- nvmf/common.sh@7 -- # uname -s 00:07:14.899 23:10:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.899 23:10:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.899 23:10:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.899 23:10:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.899 23:10:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.899 23:10:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.899 23:10:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.899 23:10:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.899 23:10:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.899 23:10:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.899 23:10:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:14.899 23:10:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:14.899 23:10:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.899 23:10:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.899 23:10:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.899 23:10:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.899 23:10:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.899 23:10:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.899 23:10:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.899 23:10:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.899 23:10:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.899 23:10:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.899 23:10:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.899 23:10:03 -- paths/export.sh@5 -- # export PATH 00:07:14.899 23:10:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.899 23:10:03 -- nvmf/common.sh@47 -- # : 0 00:07:14.899 23:10:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.899 23:10:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.899 23:10:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.899 23:10:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.899 23:10:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.899 23:10:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.899 23:10:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.899 23:10:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.899 23:10:03 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:14.899 23:10:03 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:14.899 23:10:03 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:14.899 23:10:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:14.899 23:10:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.899 23:10:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:14.899 23:10:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:14.899 23:10:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:14.899 23:10:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.899 23:10:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.899 23:10:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.899 23:10:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:14.899 23:10:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:14.899 23:10:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.899 23:10:04 -- common/autotest_common.sh@10 -- # set +x 00:07:21.484 23:10:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:21.484 23:10:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:21.484 23:10:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:21.484 23:10:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:21.484 23:10:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:21.484 23:10:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:21.484 23:10:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:21.484 23:10:10 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:21.484 23:10:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:21.484 23:10:10 -- nvmf/common.sh@296 -- # e810=() 00:07:21.484 23:10:10 -- nvmf/common.sh@296 -- # local -ga e810 00:07:21.484 23:10:10 -- nvmf/common.sh@297 -- # x722=() 00:07:21.484 23:10:10 -- nvmf/common.sh@297 -- # local -ga x722 00:07:21.484 23:10:10 -- nvmf/common.sh@298 -- # mlx=() 00:07:21.484 23:10:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:21.484 23:10:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.484 23:10:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:21.484 23:10:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:21.484 23:10:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:21.484 23:10:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:21.484 23:10:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:21.484 23:10:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:21.484 23:10:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.484 23:10:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:21.484 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:21.484 23:10:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.484 23:10:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.484 23:10:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.485 23:10:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:21.485 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:21.485 23:10:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:21.485 23:10:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.485 23:10:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.485 23:10:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:21.485 23:10:10 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.485 23:10:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:21.485 Found net devices under 0000:31:00.0: cvl_0_0 00:07:21.485 23:10:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.485 23:10:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.485 23:10:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.485 23:10:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:21.485 23:10:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.485 23:10:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:21.485 Found net devices under 0000:31:00.1: cvl_0_1 00:07:21.485 23:10:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.485 23:10:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:21.485 23:10:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:21.485 23:10:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:21.485 23:10:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:21.485 23:10:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.485 23:10:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.485 23:10:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.485 23:10:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:21.485 23:10:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.485 23:10:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.485 23:10:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:21.485 23:10:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.485 23:10:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.485 23:10:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:21.485 23:10:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:21.485 23:10:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.485 23:10:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.746 23:10:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.746 23:10:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.746 23:10:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:21.746 23:10:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.746 23:10:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.746 23:10:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.746 23:10:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:21.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:07:21.746 00:07:21.746 --- 10.0.0.2 ping statistics --- 00:07:21.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.746 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:07:21.746 23:10:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:21.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:07:21.746 00:07:21.746 --- 10.0.0.1 ping statistics --- 00:07:21.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.746 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:07:21.746 23:10:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.746 23:10:10 -- nvmf/common.sh@411 -- # return 0 00:07:21.746 23:10:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:21.746 23:10:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.746 23:10:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:21.746 23:10:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:21.746 23:10:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.746 23:10:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:21.746 23:10:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:22.007 23:10:11 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:22.007 23:10:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:22.007 23:10:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.007 23:10:11 -- common/autotest_common.sh@10 -- # set +x 00:07:22.007 ************************************ 00:07:22.007 START TEST nvmf_filesystem_no_in_capsule 00:07:22.007 ************************************ 00:07:22.007 23:10:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:22.007 23:10:11 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:22.007 23:10:11 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:22.007 23:10:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:22.007 23:10:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:22.007 23:10:11 -- common/autotest_common.sh@10 -- # set +x 00:07:22.007 23:10:11 -- nvmf/common.sh@470 -- # nvmfpid=3753677 00:07:22.007 23:10:11 -- nvmf/common.sh@471 -- # waitforlisten 3753677 00:07:22.007 23:10:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.007 23:10:11 -- common/autotest_common.sh@817 -- # '[' -z 3753677 ']' 00:07:22.007 23:10:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.007 23:10:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:22.007 23:10:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.007 23:10:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:22.007 23:10:11 -- common/autotest_common.sh@10 -- # set +x 00:07:22.007 [2024-04-26 23:10:11.242248] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:22.007 [2024-04-26 23:10:11.242296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.266 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.266 [2024-04-26 23:10:11.309387] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.266 [2024-04-26 23:10:11.344072] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:22.266 [2024-04-26 23:10:11.344111] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.266 [2024-04-26 23:10:11.344124] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.266 [2024-04-26 23:10:11.344132] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.266 [2024-04-26 23:10:11.344138] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.266 [2024-04-26 23:10:11.344298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.266 [2024-04-26 23:10:11.344413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.266 [2024-04-26 23:10:11.344549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.266 [2024-04-26 23:10:11.344550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.838 23:10:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:22.838 23:10:12 -- common/autotest_common.sh@850 -- # return 0 00:07:22.838 23:10:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:22.838 23:10:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:22.838 23:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:22.838 23:10:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.838 23:10:12 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:22.838 23:10:12 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:22.838 23:10:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:22.838 23:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:22.838 [2024-04-26 23:10:12.053499] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.838 23:10:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:22.838 23:10:12 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:22.838 23:10:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:22.838 23:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:23.099 Malloc1 00:07:23.099 23:10:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.099 23:10:12 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.099 23:10:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.099 23:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:23.099 23:10:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.099 23:10:12 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.099 23:10:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.099 23:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:23.099 23:10:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.099 23:10:12 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.099 23:10:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.099 23:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:23.099 [2024-04-26 23:10:12.185066] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.099 23:10:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.099 23:10:12 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:07:23.099 23:10:12 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:23.099 23:10:12 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:23.099 23:10:12 -- common/autotest_common.sh@1366 -- # local bs 00:07:23.099 23:10:12 -- common/autotest_common.sh@1367 -- # local nb 00:07:23.099 23:10:12 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.099 23:10:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.099 23:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:23.099 23:10:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.099 23:10:12 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:23.099 { 00:07:23.099 "name": "Malloc1", 00:07:23.099 "aliases": [ 00:07:23.099 "d53d84e1-27bd-43fc-be43-d8b2c1a51784" 00:07:23.099 ], 00:07:23.099 "product_name": "Malloc disk", 00:07:23.099 "block_size": 512, 00:07:23.099 "num_blocks": 1048576, 00:07:23.099 "uuid": "d53d84e1-27bd-43fc-be43-d8b2c1a51784", 00:07:23.099 "assigned_rate_limits": { 00:07:23.099 "rw_ios_per_sec": 0, 00:07:23.099 "rw_mbytes_per_sec": 0, 00:07:23.099 "r_mbytes_per_sec": 0, 00:07:23.099 "w_mbytes_per_sec": 0 00:07:23.099 }, 00:07:23.099 "claimed": true, 00:07:23.099 "claim_type": "exclusive_write", 00:07:23.099 "zoned": false, 00:07:23.099 "supported_io_types": { 00:07:23.099 "read": true, 00:07:23.099 "write": true, 00:07:23.099 "unmap": true, 00:07:23.099 "write_zeroes": true, 00:07:23.099 "flush": true, 00:07:23.099 "reset": true, 00:07:23.099 "compare": false, 00:07:23.099 "compare_and_write": false, 00:07:23.099 "abort": true, 00:07:23.099 "nvme_admin": false, 00:07:23.099 "nvme_io": false 00:07:23.099 }, 00:07:23.099 "memory_domains": [ 00:07:23.099 { 00:07:23.099 "dma_device_id": "system", 00:07:23.099 "dma_device_type": 1 00:07:23.099 }, 00:07:23.099 { 00:07:23.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.099 "dma_device_type": 2 00:07:23.099 } 00:07:23.099 ], 00:07:23.099 "driver_specific": {} 00:07:23.099 } 00:07:23.099 ]' 00:07:23.099 23:10:12 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:23.099 23:10:12 -- common/autotest_common.sh@1369 -- # bs=512 00:07:23.099 23:10:12 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:23.099 23:10:12 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:23.099 23:10:12 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:23.099 23:10:12 -- common/autotest_common.sh@1374 -- # echo 512 00:07:23.099 23:10:12 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.099 23:10:12 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.009 23:10:13 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.009 23:10:13 -- common/autotest_common.sh@1184 -- # local i=0 00:07:25.009 23:10:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.009 23:10:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:25.009 23:10:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:26.919 23:10:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:26.919 23:10:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:26.919 23:10:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.919 23:10:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
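Stripped of the xtrace plumbing, the export-and-connect sequence traced above reduces to five RPCs plus a size cross-check. A condensed sketch, assuming rpc.py is pointed at the target's socket and omitting the --hostnqn/--hostid flags the harness passes to nvme connect; names and addresses are the ones from this run:

    # Target side: TCP transport with in-capsule data disabled (-c 0),
    # a 512 MiB RAM-backed bdev, one subsystem, one TCP listener.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 512 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # get_bdev_size, as computed above: block_size * num_blocks.
    info=$(rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(echo "$info" | jq '.[] .block_size')    # 512
    nb=$(echo "$info" | jq '.[] .num_blocks')    # 1048576
    echo $((bs * nb))                            # 536870912, matching nvme_size later

    # Initiator side: connect over TCP, then look for the serial in lsblk.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME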
00:07:26.919 23:10:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.919 23:10:15 -- common/autotest_common.sh@1194 -- # return 0 00:07:26.919 23:10:15 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:26.919 23:10:15 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:26.919 23:10:15 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:26.919 23:10:15 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:26.919 23:10:15 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:26.919 23:10:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:26.919 23:10:15 -- setup/common.sh@80 -- # echo 536870912 00:07:26.919 23:10:15 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:26.919 23:10:15 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:26.919 23:10:15 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:26.919 23:10:15 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:26.919 23:10:16 -- target/filesystem.sh@69 -- # partprobe 00:07:27.179 23:10:16 -- target/filesystem.sh@70 -- # sleep 1 00:07:28.120 23:10:17 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:28.120 23:10:17 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:28.120 23:10:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:28.120 23:10:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.120 23:10:17 -- common/autotest_common.sh@10 -- # set +x 00:07:28.380 ************************************ 00:07:28.380 START TEST filesystem_ext4 00:07:28.380 ************************************ 00:07:28.380 23:10:17 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:28.380 23:10:17 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:28.380 23:10:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.380 23:10:17 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:28.380 23:10:17 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:28.380 23:10:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:28.380 23:10:17 -- common/autotest_common.sh@914 -- # local i=0 00:07:28.380 23:10:17 -- common/autotest_common.sh@915 -- # local force 00:07:28.380 23:10:17 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:28.380 23:10:17 -- common/autotest_common.sh@918 -- # force=-F 00:07:28.380 23:10:17 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:28.380 mke2fs 1.46.5 (30-Dec-2021) 00:07:28.380 Discarding device blocks: 0/522240 done 00:07:28.380 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:28.380 Filesystem UUID: 58dc9ad5-8be9-48e1-a1cc-e8a8c84757f7 00:07:28.380 Superblock backups stored on blocks: 00:07:28.380 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:28.380 00:07:28.380 Allocating group tables: 0/64 done 00:07:28.380 Writing inode tables: 0/64 done 00:07:28.640 Creating journal (8192 blocks): done 00:07:29.590 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:29.590 00:07:29.590 23:10:18 -- common/autotest_common.sh@931 -- # return 0 00:07:29.590 23:10:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.851 23:10:19 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.111 23:10:19 -- target/filesystem.sh@25 -- # sync 00:07:30.111 23:10:19 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:07:30.111 23:10:19 -- target/filesystem.sh@27 -- # sync 00:07:30.111 23:10:19 -- target/filesystem.sh@29 -- # i=0 00:07:30.111 23:10:19 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.111 23:10:19 -- target/filesystem.sh@37 -- # kill -0 3753677 00:07:30.111 23:10:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.111 23:10:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.111 23:10:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.111 23:10:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.111 00:07:30.111 real 0m1.693s 00:07:30.111 user 0m0.019s 00:07:30.111 sys 0m0.080s 00:07:30.111 23:10:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.111 23:10:19 -- common/autotest_common.sh@10 -- # set +x 00:07:30.111 ************************************ 00:07:30.111 END TEST filesystem_ext4 00:07:30.111 ************************************ 00:07:30.111 23:10:19 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:30.111 23:10:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:30.111 23:10:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.111 23:10:19 -- common/autotest_common.sh@10 -- # set +x 00:07:30.372 ************************************ 00:07:30.372 START TEST filesystem_btrfs 00:07:30.372 ************************************ 00:07:30.372 23:10:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:30.372 23:10:19 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:30.372 23:10:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.372 23:10:19 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:30.372 23:10:19 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:30.372 23:10:19 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:30.372 23:10:19 -- common/autotest_common.sh@914 -- # local i=0 00:07:30.372 23:10:19 -- common/autotest_common.sh@915 -- # local force 00:07:30.372 23:10:19 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:30.372 23:10:19 -- common/autotest_common.sh@920 -- # force=-f 00:07:30.372 23:10:19 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:30.632 btrfs-progs v6.6.2 00:07:30.632 See https://btrfs.readthedocs.io for more information. 00:07:30.632 00:07:30.632 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:30.632 NOTE: several default settings have changed in version 5.15, please make sure 00:07:30.632 this does not affect your deployments: 00:07:30.632 - DUP for metadata (-m dup) 00:07:30.632 - enabled no-holes (-O no-holes) 00:07:30.632 - enabled free-space-tree (-R free-space-tree) 00:07:30.632 00:07:30.632 Label: (null) 00:07:30.632 UUID: cf713f17-b42d-445d-b0af-ce3a58bab982 00:07:30.632 Node size: 16384 00:07:30.632 Sector size: 4096 00:07:30.632 Filesystem size: 510.00MiB 00:07:30.632 Block group profiles: 00:07:30.632 Data: single 8.00MiB 00:07:30.632 Metadata: DUP 32.00MiB 00:07:30.632 System: DUP 8.00MiB 00:07:30.632 SSD detected: yes 00:07:30.632 Zoned device: no 00:07:30.632 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:30.632 Runtime features: free-space-tree 00:07:30.632 Checksum: crc32c 00:07:30.632 Number of devices: 1 00:07:30.632 Devices: 00:07:30.632 ID SIZE PATH 00:07:30.632 1 510.00MiB /dev/nvme0n1p1 00:07:30.632 00:07:30.632 23:10:19 -- common/autotest_common.sh@931 -- # return 0 00:07:30.632 23:10:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.203 23:10:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.203 23:10:20 -- target/filesystem.sh@25 -- # sync 00:07:31.203 23:10:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.203 23:10:20 -- target/filesystem.sh@27 -- # sync 00:07:31.203 23:10:20 -- target/filesystem.sh@29 -- # i=0 00:07:31.203 23:10:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.203 23:10:20 -- target/filesystem.sh@37 -- # kill -0 3753677 00:07:31.203 23:10:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.203 23:10:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.203 23:10:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.203 23:10:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.203 00:07:31.203 real 0m0.849s 00:07:31.203 user 0m0.029s 00:07:31.203 sys 0m0.132s 00:07:31.203 23:10:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.203 23:10:20 -- common/autotest_common.sh@10 -- # set +x 00:07:31.203 ************************************ 00:07:31.203 END TEST filesystem_btrfs 00:07:31.203 ************************************ 00:07:31.203 23:10:20 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:31.203 23:10:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:31.203 23:10:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.203 23:10:20 -- common/autotest_common.sh@10 -- # set +x 00:07:31.203 ************************************ 00:07:31.203 START TEST filesystem_xfs 00:07:31.203 ************************************ 00:07:31.203 23:10:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:31.203 23:10:20 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:31.203 23:10:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.203 23:10:20 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:31.203 23:10:20 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:31.203 23:10:20 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:31.203 23:10:20 -- common/autotest_common.sh@914 -- # local i=0 00:07:31.203 23:10:20 -- common/autotest_common.sh@915 -- # local force 00:07:31.203 23:10:20 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:31.203 23:10:20 -- common/autotest_common.sh@920 -- # force=-f 00:07:31.203 23:10:20 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:31.463 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:31.463 = sectsz=512 attr=2, projid32bit=1 00:07:31.463 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:31.463 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:31.463 data = bsize=4096 blocks=130560, imaxpct=25 00:07:31.463 = sunit=0 swidth=0 blks 00:07:31.463 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:31.463 log =internal log bsize=4096 blocks=16384, version=2 00:07:31.463 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:31.463 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:32.404 Discarding blocks...Done. 00:07:32.404 23:10:21 -- common/autotest_common.sh@931 -- # return 0 00:07:32.404 23:10:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:34.317 23:10:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:34.317 23:10:23 -- target/filesystem.sh@25 -- # sync 00:07:34.317 23:10:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:34.317 23:10:23 -- target/filesystem.sh@27 -- # sync 00:07:34.317 23:10:23 -- target/filesystem.sh@29 -- # i=0 00:07:34.317 23:10:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:34.317 23:10:23 -- target/filesystem.sh@37 -- # kill -0 3753677 00:07:34.317 23:10:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:34.317 23:10:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:34.317 23:10:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:34.317 23:10:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:34.317 00:07:34.317 real 0m2.950s 00:07:34.317 user 0m0.025s 00:07:34.317 sys 0m0.078s 00:07:34.317 23:10:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.317 23:10:23 -- common/autotest_common.sh@10 -- # set +x 00:07:34.317 ************************************ 00:07:34.317 END TEST filesystem_xfs 00:07:34.317 ************************************ 00:07:34.317 23:10:23 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:34.578 23:10:23 -- target/filesystem.sh@93 -- # sync 00:07:34.838 23:10:24 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:35.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.099 23:10:24 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:35.099 23:10:24 -- common/autotest_common.sh@1205 -- # local i=0 00:07:35.099 23:10:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:35.099 23:10:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.099 23:10:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:35.099 23:10:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.099 23:10:24 -- common/autotest_common.sh@1217 -- # return 0 00:07:35.099 23:10:24 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.099 23:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.099 23:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.099 23:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.099 23:10:24 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:35.099 23:10:24 -- target/filesystem.sh@101 -- # killprocess 3753677 00:07:35.099 23:10:24 -- common/autotest_common.sh@936 -- # '[' -z 3753677 ']' 00:07:35.099 23:10:24 -- common/autotest_common.sh@940 -- # kill -0 3753677 00:07:35.099 23:10:24 -- 
common/autotest_common.sh@941 -- # uname 00:07:35.099 23:10:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:35.099 23:10:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3753677 00:07:35.099 23:10:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:35.099 23:10:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:35.099 23:10:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3753677' 00:07:35.099 killing process with pid 3753677 00:07:35.099 23:10:24 -- common/autotest_common.sh@955 -- # kill 3753677 00:07:35.099 23:10:24 -- common/autotest_common.sh@960 -- # wait 3753677 00:07:35.361 23:10:24 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:35.361 00:07:35.361 real 0m13.306s 00:07:35.361 user 0m52.724s 00:07:35.361 sys 0m1.338s 00:07:35.361 23:10:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.361 23:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.361 ************************************ 00:07:35.361 END TEST nvmf_filesystem_no_in_capsule 00:07:35.361 ************************************ 00:07:35.361 23:10:24 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:35.361 23:10:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:35.361 23:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.361 23:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.622 ************************************ 00:07:35.622 START TEST nvmf_filesystem_in_capsule 00:07:35.622 ************************************ 00:07:35.622 23:10:24 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:35.622 23:10:24 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:35.622 23:10:24 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:35.622 23:10:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:35.622 23:10:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:35.622 23:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.622 23:10:24 -- nvmf/common.sh@470 -- # nvmfpid=3756622 00:07:35.622 23:10:24 -- nvmf/common.sh@471 -- # waitforlisten 3756622 00:07:35.622 23:10:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.622 23:10:24 -- common/autotest_common.sh@817 -- # '[' -z 3756622 ']' 00:07:35.622 23:10:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.622 23:10:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:35.622 23:10:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.622 23:10:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:35.622 23:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.622 [2024-04-26 23:10:24.729516] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:35.622 [2024-04-26 23:10:24.729560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.622 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.622 [2024-04-26 23:10:24.794435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.622 [2024-04-26 23:10:24.824526] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.622 [2024-04-26 23:10:24.824562] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.622 [2024-04-26 23:10:24.824571] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.622 [2024-04-26 23:10:24.824579] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.622 [2024-04-26 23:10:24.824586] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.622 [2024-04-26 23:10:24.824732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.622 [2024-04-26 23:10:24.824875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.622 [2024-04-26 23:10:24.825027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.622 [2024-04-26 23:10:24.825027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.566 23:10:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:36.566 23:10:25 -- common/autotest_common.sh@850 -- # return 0 00:07:36.566 23:10:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:36.566 23:10:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:36.566 23:10:25 -- common/autotest_common.sh@10 -- # set +x 00:07:36.566 23:10:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.567 23:10:25 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:36.567 23:10:25 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:36.567 23:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:36.567 23:10:25 -- common/autotest_common.sh@10 -- # set +x 00:07:36.567 [2024-04-26 23:10:25.548553] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.567 23:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:36.567 23:10:25 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:36.567 23:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:36.567 23:10:25 -- common/autotest_common.sh@10 -- # set +x 00:07:36.567 Malloc1 00:07:36.567 23:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:36.567 23:10:25 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:36.567 23:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:36.567 23:10:25 -- common/autotest_common.sh@10 -- # set +x 00:07:36.567 23:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:36.567 23:10:25 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.567 23:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:36.567 23:10:25 -- common/autotest_common.sh@10 -- # set +x 00:07:36.567 23:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:36.567 23:10:25 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.567 23:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:36.567 23:10:25 -- common/autotest_common.sh@10 -- # set +x 00:07:36.567 [2024-04-26 23:10:25.670067] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.567 23:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:36.567 23:10:25 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:36.567 23:10:25 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:36.567 23:10:25 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:36.567 23:10:25 -- common/autotest_common.sh@1366 -- # local bs 00:07:36.567 23:10:25 -- common/autotest_common.sh@1367 -- # local nb 00:07:36.567 23:10:25 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:36.567 23:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:36.567 23:10:25 -- common/autotest_common.sh@10 -- # set +x 00:07:36.567 23:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:36.567 23:10:25 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:36.567 { 00:07:36.567 "name": "Malloc1", 00:07:36.567 "aliases": [ 00:07:36.567 "73b98569-cbc2-4b89-bda5-30896dd0ff57" 00:07:36.567 ], 00:07:36.567 "product_name": "Malloc disk", 00:07:36.567 "block_size": 512, 00:07:36.567 "num_blocks": 1048576, 00:07:36.567 "uuid": "73b98569-cbc2-4b89-bda5-30896dd0ff57", 00:07:36.567 "assigned_rate_limits": { 00:07:36.567 "rw_ios_per_sec": 0, 00:07:36.567 "rw_mbytes_per_sec": 0, 00:07:36.567 "r_mbytes_per_sec": 0, 00:07:36.567 "w_mbytes_per_sec": 0 00:07:36.567 }, 00:07:36.567 "claimed": true, 00:07:36.567 "claim_type": "exclusive_write", 00:07:36.567 "zoned": false, 00:07:36.567 "supported_io_types": { 00:07:36.567 "read": true, 00:07:36.567 "write": true, 00:07:36.567 "unmap": true, 00:07:36.567 "write_zeroes": true, 00:07:36.567 "flush": true, 00:07:36.567 "reset": true, 00:07:36.567 "compare": false, 00:07:36.567 "compare_and_write": false, 00:07:36.567 "abort": true, 00:07:36.567 "nvme_admin": false, 00:07:36.567 "nvme_io": false 00:07:36.567 }, 00:07:36.567 "memory_domains": [ 00:07:36.567 { 00:07:36.567 "dma_device_id": "system", 00:07:36.567 "dma_device_type": 1 00:07:36.567 }, 00:07:36.567 { 00:07:36.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.567 "dma_device_type": 2 00:07:36.567 } 00:07:36.567 ], 00:07:36.567 "driver_specific": {} 00:07:36.567 } 00:07:36.567 ]' 00:07:36.567 23:10:25 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:36.567 23:10:25 -- common/autotest_common.sh@1369 -- # bs=512 00:07:36.567 23:10:25 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:36.567 23:10:25 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:36.567 23:10:25 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:36.567 23:10:25 -- common/autotest_common.sh@1374 -- # echo 512 00:07:36.567 23:10:25 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:36.567 23:10:25 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.481 23:10:27 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.481 23:10:27 -- common/autotest_common.sh@1184 -- # local i=0 00:07:38.481 23:10:27 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:38.481 23:10:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:38.481 23:10:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:40.395 23:10:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:40.395 23:10:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:40.395 23:10:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.396 23:10:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:40.396 23:10:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.396 23:10:29 -- common/autotest_common.sh@1194 -- # return 0 00:07:40.396 23:10:29 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:40.396 23:10:29 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:40.396 23:10:29 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:40.396 23:10:29 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:40.396 23:10:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:40.396 23:10:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:40.396 23:10:29 -- setup/common.sh@80 -- # echo 536870912 00:07:40.396 23:10:29 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:40.396 23:10:29 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:40.396 23:10:29 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:40.396 23:10:29 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:40.657 23:10:29 -- target/filesystem.sh@69 -- # partprobe 00:07:41.228 23:10:30 -- target/filesystem.sh@70 -- # sleep 1 00:07:42.614 23:10:31 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:42.614 23:10:31 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:42.614 23:10:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:42.614 23:10:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.614 23:10:31 -- common/autotest_common.sh@10 -- # set +x 00:07:42.614 ************************************ 00:07:42.614 START TEST filesystem_in_capsule_ext4 00:07:42.614 ************************************ 00:07:42.614 23:10:31 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:42.614 23:10:31 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:42.614 23:10:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.614 23:10:31 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:42.614 23:10:31 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:42.614 23:10:31 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:42.614 23:10:31 -- common/autotest_common.sh@914 -- # local i=0 00:07:42.614 23:10:31 -- common/autotest_common.sh@915 -- # local force 00:07:42.614 23:10:31 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:42.614 23:10:31 -- common/autotest_common.sh@918 -- # force=-F 00:07:42.614 23:10:31 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:42.614 mke2fs 1.46.5 (30-Dec-2021) 00:07:42.614 Discarding device blocks: 0/522240 done 00:07:42.614 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:42.614 Filesystem UUID: 354e125d-2d80-41b3-bf03-d3e26dcb4279 00:07:42.614 Superblock backups stored on blocks: 00:07:42.614 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:42.614 00:07:42.614 
Allocating group tables: 0/64 done 00:07:42.614 Writing inode tables: 0/64 done 00:07:42.614 Creating journal (8192 blocks): done 00:07:42.614 Writing superblocks and filesystem accounting information: 0/64 done 00:07:42.614 00:07:42.614 23:10:31 -- common/autotest_common.sh@931 -- # return 0 00:07:42.614 23:10:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.185 23:10:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.185 23:10:32 -- target/filesystem.sh@25 -- # sync 00:07:43.185 23:10:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.185 23:10:32 -- target/filesystem.sh@27 -- # sync 00:07:43.185 23:10:32 -- target/filesystem.sh@29 -- # i=0 00:07:43.185 23:10:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.185 23:10:32 -- target/filesystem.sh@37 -- # kill -0 3756622 00:07:43.185 23:10:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.185 23:10:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.185 23:10:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.185 23:10:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.185 00:07:43.185 real 0m0.735s 00:07:43.185 user 0m0.028s 00:07:43.185 sys 0m0.069s 00:07:43.185 23:10:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.185 23:10:32 -- common/autotest_common.sh@10 -- # set +x 00:07:43.185 ************************************ 00:07:43.185 END TEST filesystem_in_capsule_ext4 00:07:43.185 ************************************ 00:07:43.185 23:10:32 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:43.185 23:10:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:43.185 23:10:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.185 23:10:32 -- common/autotest_common.sh@10 -- # set +x 00:07:43.446 ************************************ 00:07:43.446 START TEST filesystem_in_capsule_btrfs 00:07:43.446 ************************************ 00:07:43.446 23:10:32 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:43.446 23:10:32 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:43.446 23:10:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.446 23:10:32 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:43.446 23:10:32 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:43.446 23:10:32 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:43.446 23:10:32 -- common/autotest_common.sh@914 -- # local i=0 00:07:43.446 23:10:32 -- common/autotest_common.sh@915 -- # local force 00:07:43.446 23:10:32 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:43.447 23:10:32 -- common/autotest_common.sh@920 -- # force=-f 00:07:43.447 23:10:32 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:43.707 btrfs-progs v6.6.2 00:07:43.707 See https://btrfs.readthedocs.io for more information. 00:07:43.707 00:07:43.707 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:43.707 NOTE: several default settings have changed in version 5.15, please make sure 00:07:43.707 this does not affect your deployments: 00:07:43.707 - DUP for metadata (-m dup) 00:07:43.707 - enabled no-holes (-O no-holes) 00:07:43.707 - enabled free-space-tree (-R free-space-tree) 00:07:43.707 00:07:43.707 Label: (null) 00:07:43.707 UUID: 8605190c-b87b-4c29-b0c7-463762f90a4e 00:07:43.707 Node size: 16384 00:07:43.707 Sector size: 4096 00:07:43.707 Filesystem size: 510.00MiB 00:07:43.707 Block group profiles: 00:07:43.707 Data: single 8.00MiB 00:07:43.707 Metadata: DUP 32.00MiB 00:07:43.707 System: DUP 8.00MiB 00:07:43.707 SSD detected: yes 00:07:43.707 Zoned device: no 00:07:43.707 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:43.707 Runtime features: free-space-tree 00:07:43.707 Checksum: crc32c 00:07:43.707 Number of devices: 1 00:07:43.707 Devices: 00:07:43.707 ID SIZE PATH 00:07:43.707 1 510.00MiB /dev/nvme0n1p1 00:07:43.707 00:07:43.707 23:10:32 -- common/autotest_common.sh@931 -- # return 0 00:07:43.707 23:10:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.647 23:10:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.647 23:10:33 -- target/filesystem.sh@25 -- # sync 00:07:44.647 23:10:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.647 23:10:33 -- target/filesystem.sh@27 -- # sync 00:07:44.647 23:10:33 -- target/filesystem.sh@29 -- # i=0 00:07:44.647 23:10:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.647 23:10:33 -- target/filesystem.sh@37 -- # kill -0 3756622 00:07:44.647 23:10:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.647 23:10:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.647 23:10:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.647 23:10:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.647 00:07:44.647 real 0m1.152s 00:07:44.647 user 0m0.027s 00:07:44.647 sys 0m0.135s 00:07:44.647 23:10:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.647 23:10:33 -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 ************************************ 00:07:44.647 END TEST filesystem_in_capsule_btrfs 00:07:44.647 ************************************ 00:07:44.647 23:10:33 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:44.647 23:10:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:44.647 23:10:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.647 23:10:33 -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 ************************************ 00:07:44.647 START TEST filesystem_in_capsule_xfs 00:07:44.647 ************************************ 00:07:44.647 23:10:33 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:44.647 23:10:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:44.647 23:10:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.647 23:10:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:44.647 23:10:33 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:44.647 23:10:33 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:44.647 23:10:33 -- common/autotest_common.sh@914 -- # local i=0 00:07:44.647 23:10:33 -- common/autotest_common.sh@915 -- # local force 00:07:44.648 23:10:33 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:44.648 23:10:33 -- common/autotest_common.sh@920 -- # force=-f 
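Each filesystem_* test above and below runs the same verification once mkfs returns; only the mkfs invocation differs (mkfs.ext4 takes -F while mkfs.btrfs and mkfs.xfs take -f, per the make_filesystem traces). The shared pattern, sketched with nvmfpid standing in for the target pid (3756622 in this in-capsule run):

    mkdir -p /mnt/device
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                    # a small write through the NVMe/TCP path
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                       # the target app must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1    # the namespace must still be visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # and so must the partition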
00:07:44.648 23:10:33 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:44.907 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:44.907 = sectsz=512 attr=2, projid32bit=1 00:07:44.907 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:44.907 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:44.907 data = bsize=4096 blocks=130560, imaxpct=25 00:07:44.907 = sunit=0 swidth=0 blks 00:07:44.907 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:44.907 log =internal log bsize=4096 blocks=16384, version=2 00:07:44.907 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:44.907 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:45.849 Discarding blocks...Done. 00:07:45.849 23:10:34 -- common/autotest_common.sh@931 -- # return 0 00:07:45.849 23:10:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.767 23:10:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.767 23:10:36 -- target/filesystem.sh@25 -- # sync 00:07:47.767 23:10:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.767 23:10:36 -- target/filesystem.sh@27 -- # sync 00:07:47.767 23:10:36 -- target/filesystem.sh@29 -- # i=0 00:07:47.767 23:10:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.767 23:10:36 -- target/filesystem.sh@37 -- # kill -0 3756622 00:07:47.767 23:10:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.767 23:10:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.767 23:10:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.767 23:10:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.767 00:07:47.767 real 0m3.069s 00:07:47.767 user 0m0.027s 00:07:47.767 sys 0m0.077s 00:07:47.767 23:10:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.767 23:10:36 -- common/autotest_common.sh@10 -- # set +x 00:07:47.767 ************************************ 00:07:47.767 END TEST filesystem_in_capsule_xfs 00:07:47.767 ************************************ 00:07:47.767 23:10:36 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:48.339 23:10:37 -- target/filesystem.sh@93 -- # sync 00:07:48.339 23:10:37 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.339 23:10:37 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.339 23:10:37 -- common/autotest_common.sh@1205 -- # local i=0 00:07:48.339 23:10:37 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:48.339 23:10:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.339 23:10:37 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:48.339 23:10:37 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.339 23:10:37 -- common/autotest_common.sh@1217 -- # return 0 00:07:48.339 23:10:37 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.339 23:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.339 23:10:37 -- common/autotest_common.sh@10 -- # set +x 00:07:48.339 23:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.339 23:10:37 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:48.339 23:10:37 -- target/filesystem.sh@101 -- # killprocess 3756622 00:07:48.339 23:10:37 -- common/autotest_common.sh@936 -- # '[' -z 3756622 ']' 00:07:48.339 23:10:37 -- common/autotest_common.sh@940 -- # kill -0 3756622 
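The teardown being traced here runs in a fixed order: wipe the test partition, drop the initiator session, delete the subsystem, then stop the target. Roughly, as a sketch (waitforserial_disconnect is the harness's poll for the serial to disappear from lsblk, elided here; the module unload corresponds to the rmmod output that follows):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # serialized partition removal
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"               # killprocess, as in the trace
    modprobe -v -r nvme-tcp nvme-fabrics             # unload initiator modules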
00:07:48.339 23:10:37 -- common/autotest_common.sh@941 -- # uname 00:07:48.339 23:10:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:48.339 23:10:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3756622 00:07:48.339 23:10:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:48.339 23:10:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:48.339 23:10:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3756622' 00:07:48.339 killing process with pid 3756622 00:07:48.339 23:10:37 -- common/autotest_common.sh@955 -- # kill 3756622 00:07:48.339 23:10:37 -- common/autotest_common.sh@960 -- # wait 3756622 00:07:48.600 23:10:37 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:48.600 00:07:48.600 real 0m13.101s 00:07:48.600 user 0m51.916s 00:07:48.600 sys 0m1.393s 00:07:48.600 23:10:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:48.600 23:10:37 -- common/autotest_common.sh@10 -- # set +x 00:07:48.600 ************************************ 00:07:48.600 END TEST nvmf_filesystem_in_capsule 00:07:48.600 ************************************ 00:07:48.600 23:10:37 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:48.600 23:10:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:48.600 23:10:37 -- nvmf/common.sh@117 -- # sync 00:07:48.600 23:10:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.600 23:10:37 -- nvmf/common.sh@120 -- # set +e 00:07:48.600 23:10:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.600 23:10:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.600 rmmod nvme_tcp 00:07:48.600 rmmod nvme_fabrics 00:07:48.861 rmmod nvme_keyring 00:07:48.861 23:10:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.861 23:10:37 -- nvmf/common.sh@124 -- # set -e 00:07:48.861 23:10:37 -- nvmf/common.sh@125 -- # return 0 00:07:48.861 23:10:37 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:48.861 23:10:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:48.861 23:10:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:48.861 23:10:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:48.861 23:10:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.861 23:10:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.861 23:10:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.861 23:10:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.861 23:10:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.776 23:10:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:50.776 00:07:50.776 real 0m36.264s 00:07:50.776 user 1m46.862s 00:07:50.776 sys 0m8.225s 00:07:50.776 23:10:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:50.776 23:10:39 -- common/autotest_common.sh@10 -- # set +x 00:07:50.776 ************************************ 00:07:50.776 END TEST nvmf_filesystem 00:07:50.776 ************************************ 00:07:50.776 23:10:40 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:50.776 23:10:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:50.776 23:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.776 23:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:51.037 ************************************ 00:07:51.037 START TEST nvmf_discovery 00:07:51.037 ************************************ 00:07:51.037 
23:10:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:51.037 * Looking for test storage... 00:07:51.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.037 23:10:40 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.037 23:10:40 -- nvmf/common.sh@7 -- # uname -s 00:07:51.037 23:10:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.037 23:10:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.037 23:10:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.037 23:10:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.037 23:10:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.037 23:10:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.037 23:10:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.037 23:10:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.037 23:10:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.037 23:10:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.037 23:10:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:51.037 23:10:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:51.037 23:10:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.037 23:10:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.037 23:10:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.037 23:10:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.037 23:10:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.037 23:10:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.037 23:10:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.037 23:10:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.037 23:10:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.037 23:10:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.037 23:10:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.037 23:10:40 -- paths/export.sh@5 -- # export PATH 00:07:51.037 23:10:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.037 23:10:40 -- nvmf/common.sh@47 -- # : 0 00:07:51.037 23:10:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.037 23:10:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.037 23:10:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.037 23:10:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.037 23:10:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.037 23:10:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.037 23:10:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.037 23:10:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.037 23:10:40 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:51.037 23:10:40 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:51.038 23:10:40 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:51.038 23:10:40 -- target/discovery.sh@15 -- # hash nvme 00:07:51.038 23:10:40 -- target/discovery.sh@20 -- # nvmftestinit 00:07:51.038 23:10:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:51.038 23:10:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.038 23:10:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:51.038 23:10:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:51.038 23:10:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:51.038 23:10:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.038 23:10:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.038 23:10:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.298 23:10:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:51.299 23:10:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:51.299 23:10:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.299 23:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:57.890 23:10:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:57.890 23:10:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.890 23:10:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.890 23:10:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.890 23:10:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.890 23:10:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.890 23:10:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.890 23:10:47 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:57.890 23:10:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.890 23:10:47 -- nvmf/common.sh@296 -- # e810=() 00:07:57.890 23:10:47 -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.890 23:10:47 -- nvmf/common.sh@297 -- # x722=() 00:07:57.890 23:10:47 -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.890 23:10:47 -- nvmf/common.sh@298 -- # mlx=() 00:07:57.890 23:10:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.890 23:10:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.890 23:10:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.890 23:10:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.890 23:10:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.890 23:10:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.890 23:10:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:57.890 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:57.890 23:10:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.890 23:10:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:57.890 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:57.890 23:10:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.890 23:10:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.890 23:10:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.890 23:10:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.890 23:10:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:57.890 23:10:47 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.891 23:10:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:57.891 Found net devices under 0000:31:00.0: cvl_0_0 00:07:57.891 23:10:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.891 23:10:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.891 23:10:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.891 23:10:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:57.891 23:10:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.891 23:10:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:57.891 Found net devices under 0000:31:00.1: cvl_0_1 00:07:57.891 23:10:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.891 23:10:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:57.891 23:10:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:57.891 23:10:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:57.891 23:10:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:57.891 23:10:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:57.891 23:10:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.891 23:10:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.891 23:10:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.891 23:10:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.891 23:10:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.891 23:10:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.891 23:10:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.891 23:10:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.891 23:10:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.891 23:10:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.891 23:10:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.891 23:10:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.891 23:10:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.152 23:10:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.152 23:10:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.152 23:10:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:58.152 23:10:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.152 23:10:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.152 23:10:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.152 23:10:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:58.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.757 ms 00:07:58.152 00:07:58.152 --- 10.0.0.2 ping statistics --- 00:07:58.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.152 rtt min/avg/max/mdev = 0.757/0.757/0.757/0.000 ms 00:07:58.152 23:10:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:07:58.152 00:07:58.152 --- 10.0.0.1 ping statistics --- 00:07:58.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.152 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:07:58.152 23:10:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.152 23:10:47 -- nvmf/common.sh@411 -- # return 0 00:07:58.152 23:10:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:58.152 23:10:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.152 23:10:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:58.152 23:10:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:58.152 23:10:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.152 23:10:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:58.152 23:10:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:58.152 23:10:47 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:58.152 23:10:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:58.152 23:10:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:58.152 23:10:47 -- common/autotest_common.sh@10 -- # set +x 00:07:58.152 23:10:47 -- nvmf/common.sh@470 -- # nvmfpid=3763620 00:07:58.152 23:10:47 -- nvmf/common.sh@471 -- # waitforlisten 3763620 00:07:58.152 23:10:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:58.152 23:10:47 -- common/autotest_common.sh@817 -- # '[' -z 3763620 ']' 00:07:58.152 23:10:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.152 23:10:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:58.152 23:10:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.152 23:10:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:58.152 23:10:47 -- common/autotest_common.sh@10 -- # set +x 00:07:58.413 [2024-04-26 23:10:47.449748] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:58.413 [2024-04-26 23:10:47.449830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.413 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.413 [2024-04-26 23:10:47.521664] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.413 [2024-04-26 23:10:47.559042] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.413 [2024-04-26 23:10:47.559092] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.413 [2024-04-26 23:10:47.559102] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.413 [2024-04-26 23:10:47.559110] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.413 [2024-04-26 23:10:47.559117] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
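With connectivity proven, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk ... -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the RPC socket answers. A simplified reconstruction of that start-and-wait pattern, with paths taken from the log; the polling loop below is an assumption, not the harness's exact code:

  # Start the target in the namespace and wait for /var/tmp/spdk.sock.
  ip netns exec cvl_0_0_ns_spdk \
      ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is up"   # -m 0xF pins reactors to cores 0-3, as in the notices above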
00:07:58.413 [2024-04-26 23:10:47.559258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.413 [2024-04-26 23:10:47.559378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.413 [2024-04-26 23:10:47.559539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.413 [2024-04-26 23:10:47.559540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.983 23:10:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:58.983 23:10:48 -- common/autotest_common.sh@850 -- # return 0 00:07:58.983 23:10:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:58.983 23:10:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:58.983 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.245 23:10:48 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 [2024-04-26 23:10:48.262515] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@26 -- # seq 1 4 00:07:59.245 23:10:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.245 23:10:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 Null1 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 [2024-04-26 23:10:48.322831] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.245 23:10:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 Null2 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:59.245 23:10:48 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.245 23:10:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 Null3 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.245 23:10:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 Null4 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.245 23:10:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:59.245 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.245 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.245 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.246 23:10:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:59.246 
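The loop traced here runs four times; each pass creates a null bdev (no backing storage), wraps it in a subsystem, attaches it as namespace 1, and exposes it on the TCP listener. Stripped of the rpc_cmd/xtrace plumbing, one pass is equivalent to these rpc.py calls, with arguments copied from the trace and the rpc.py path assumed relative to the SPDK checkout:

  rpc=./spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # once, before the loop
  $rpc bdev_null_create Null1 102400 512                       # name, size, block size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1  # becomes nsid 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420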
23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.246 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.246 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:59.246 23:10:48 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:59.246 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.246 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.246 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:59.246 23:10:48 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:07:59.246 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.246 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.246 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:59.246 23:10:48 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420
00:07:59.506
00:07:59.506 Discovery Log Number of Records 6, Generation counter 6
00:07:59.506 =====Discovery Log Entry 0======
00:07:59.506 trtype: tcp
00:07:59.506 adrfam: ipv4
00:07:59.506 subtype: current discovery subsystem
00:07:59.506 treq: not required
00:07:59.506 portid: 0
00:07:59.506 trsvcid: 4420
00:07:59.506 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:07:59.506 traddr: 10.0.0.2
00:07:59.506 eflags: explicit discovery connections, duplicate discovery information
00:07:59.506 sectype: none
00:07:59.506 =====Discovery Log Entry 1======
00:07:59.506 trtype: tcp
00:07:59.506 adrfam: ipv4
00:07:59.506 subtype: nvme subsystem
00:07:59.506 treq: not required
00:07:59.506 portid: 0
00:07:59.506 trsvcid: 4420
00:07:59.506 subnqn: nqn.2016-06.io.spdk:cnode1
00:07:59.506 traddr: 10.0.0.2
00:07:59.506 eflags: none
00:07:59.506 sectype: none
00:07:59.506 =====Discovery Log Entry 2======
00:07:59.506 trtype: tcp
00:07:59.506 adrfam: ipv4
00:07:59.506 subtype: nvme subsystem
00:07:59.506 treq: not required
00:07:59.506 portid: 0
00:07:59.506 trsvcid: 4420
00:07:59.506 subnqn: nqn.2016-06.io.spdk:cnode2
00:07:59.506 traddr: 10.0.0.2
00:07:59.506 eflags: none
00:07:59.506 sectype: none
00:07:59.506 =====Discovery Log Entry 3======
00:07:59.506 trtype: tcp
00:07:59.506 adrfam: ipv4
00:07:59.506 subtype: nvme subsystem
00:07:59.506 treq: not required
00:07:59.506 portid: 0
00:07:59.506 trsvcid: 4420
00:07:59.506 subnqn: nqn.2016-06.io.spdk:cnode3
00:07:59.506 traddr: 10.0.0.2
00:07:59.506 eflags: none
00:07:59.506 sectype: none
00:07:59.506 =====Discovery Log Entry 4======
00:07:59.506 trtype: tcp
00:07:59.506 adrfam: ipv4
00:07:59.506 subtype: nvme subsystem
00:07:59.506 treq: not required
00:07:59.506 portid: 0
00:07:59.506 trsvcid: 4420
00:07:59.506 subnqn: nqn.2016-06.io.spdk:cnode4
00:07:59.506 traddr: 10.0.0.2
00:07:59.506 eflags: none
00:07:59.506 sectype: none
00:07:59.506 =====Discovery Log Entry 5======
00:07:59.506 trtype: tcp
00:07:59.506 adrfam: ipv4
00:07:59.506 subtype: discovery subsystem referral
00:07:59.506 treq: not required
00:07:59.506 portid: 0
00:07:59.506 trsvcid: 4430
00:07:59.506 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:07:59.506 traddr: 10.0.0.2
00:07:59.506 eflags: none
00:07:59.506 sectype: none
00:07:59.506 23:10:48 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:07:59.506 Perform nvmf subsystem discovery via RPC
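The six records are exactly what the setup predicts: entry 0 is the discovery subsystem answering the query itself, entries 1 through 4 are the four cnode subsystems on port 4420, and entry 5 is the referral added on port 4430. The same breakdown can be pulled programmatically from the JSON output (hostnqn/hostid flags elided here; jq assumed available):

  nvme discover -t tcp -a 10.0.0.2 -s 4420 -o json \
      | jq -r '.records[].subtype' | sort | uniq -c
  # expected, per the log above:
  #   1 current discovery subsystem
  #   1 discovery subsystem referral
  #   4 nvme subsystem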
00:07:59.506 23:10:48 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:07:59.506 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.506 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.506 [2024-04-26 23:10:48.579509] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:07:59.506 [
00:07:59.506 {
00:07:59.506 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:07:59.506 "subtype": "Discovery",
00:07:59.506 "listen_addresses": [
00:07:59.506 {
00:07:59.506 "transport": "TCP",
00:07:59.506 "trtype": "TCP",
00:07:59.506 "adrfam": "IPv4",
00:07:59.506 "traddr": "10.0.0.2",
00:07:59.506 "trsvcid": "4420"
00:07:59.506 }
00:07:59.506 ],
00:07:59.506 "allow_any_host": true,
00:07:59.506 "hosts": []
00:07:59.506 },
00:07:59.506 {
00:07:59.506 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:07:59.506 "subtype": "NVMe",
00:07:59.506 "listen_addresses": [
00:07:59.506 {
00:07:59.506 "transport": "TCP",
00:07:59.506 "trtype": "TCP",
00:07:59.506 "adrfam": "IPv4",
00:07:59.506 "traddr": "10.0.0.2",
00:07:59.506 "trsvcid": "4420"
00:07:59.506 }
00:07:59.506 ],
00:07:59.506 "allow_any_host": true,
00:07:59.506 "hosts": [],
00:07:59.506 "serial_number": "SPDK00000000000001",
00:07:59.506 "model_number": "SPDK bdev Controller",
00:07:59.506 "max_namespaces": 32,
00:07:59.506 "min_cntlid": 1,
00:07:59.506 "max_cntlid": 65519,
00:07:59.506 "namespaces": [
00:07:59.506 {
00:07:59.506 "nsid": 1,
00:07:59.506 "bdev_name": "Null1",
00:07:59.506 "name": "Null1",
00:07:59.506 "nguid": "3692A6B9788B43F7BD68573F12F16E7B",
00:07:59.506 "uuid": "3692a6b9-788b-43f7-bd68-573f12f16e7b"
00:07:59.506 }
00:07:59.506 ]
00:07:59.506 },
00:07:59.506 {
00:07:59.506 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:07:59.506 "subtype": "NVMe",
00:07:59.506 "listen_addresses": [
00:07:59.506 {
00:07:59.506 "transport": "TCP",
00:07:59.506 "trtype": "TCP",
00:07:59.506 "adrfam": "IPv4",
00:07:59.506 "traddr": "10.0.0.2",
00:07:59.506 "trsvcid": "4420"
00:07:59.506 }
00:07:59.506 ],
00:07:59.506 "allow_any_host": true,
00:07:59.506 "hosts": [],
00:07:59.506 "serial_number": "SPDK00000000000002",
00:07:59.506 "model_number": "SPDK bdev Controller",
00:07:59.506 "max_namespaces": 32,
00:07:59.506 "min_cntlid": 1,
00:07:59.506 "max_cntlid": 65519,
00:07:59.506 "namespaces": [
00:07:59.506 {
00:07:59.506 "nsid": 1,
00:07:59.506 "bdev_name": "Null2",
00:07:59.506 "name": "Null2",
00:07:59.506 "nguid": "E0DFFD1BC1594FF394B3FE269D3F589A",
00:07:59.506 "uuid": "e0dffd1b-c159-4ff3-94b3-fe269d3f589a"
00:07:59.506 }
00:07:59.506 ]
00:07:59.506 },
00:07:59.506 {
00:07:59.506 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:07:59.507 "subtype": "NVMe",
00:07:59.507 "listen_addresses": [
00:07:59.507 {
00:07:59.507 "transport": "TCP",
00:07:59.507 "trtype": "TCP",
00:07:59.507 "adrfam": "IPv4",
00:07:59.507 "traddr": "10.0.0.2",
00:07:59.507 "trsvcid": "4420"
00:07:59.507 }
00:07:59.507 ],
00:07:59.507 "allow_any_host": true,
00:07:59.507 "hosts": [],
00:07:59.507 "serial_number": "SPDK00000000000003",
00:07:59.507 "model_number": "SPDK bdev Controller",
00:07:59.507 "max_namespaces": 32,
00:07:59.507 "min_cntlid": 1,
00:07:59.507 "max_cntlid": 65519,
00:07:59.507 "namespaces": [
00:07:59.507 {
00:07:59.507 "nsid": 1,
00:07:59.507 "bdev_name": "Null3",
00:07:59.507 "name": "Null3",
00:07:59.507 "nguid": "9C0A121B33D44020BCFED4123EF76AAA",
00:07:59.507 "uuid": "9c0a121b-33d4-4020-bcfe-d4123ef76aaa"
00:07:59.507 }
00:07:59.507 ]
00:07:59.507 },
00:07:59.507 {
00:07:59.507 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:07:59.507 "subtype": "NVMe",
00:07:59.507 "listen_addresses": [
00:07:59.507 {
00:07:59.507 "transport": "TCP",
00:07:59.507 "trtype": "TCP",
00:07:59.507 "adrfam": "IPv4",
00:07:59.507 "traddr": "10.0.0.2",
00:07:59.507 "trsvcid": "4420"
00:07:59.507 }
00:07:59.507 ],
00:07:59.507 "allow_any_host": true,
00:07:59.507 "hosts": [],
00:07:59.507 "serial_number": "SPDK00000000000004",
00:07:59.507 "model_number": "SPDK bdev Controller",
00:07:59.507 "max_namespaces": 32,
00:07:59.507 "min_cntlid": 1,
00:07:59.507 "max_cntlid": 65519,
00:07:59.507 "namespaces": [
00:07:59.507 {
00:07:59.507 "nsid": 1,
00:07:59.507 "bdev_name": "Null4",
00:07:59.507 "name": "Null4",
00:07:59.507 "nguid": "C9C8EF6CC4624640B6B7473D8A9B7272",
00:07:59.507 "uuid": "c9c8ef6c-c462-4640-b6b7-473d8a9b7272"
00:07:59.507 }
00:07:59.507 ]
00:07:59.507 }
00:07:59.507 ]
00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:59.507 23:10:48 -- target/discovery.sh@42 -- # seq 1 4
00:07:59.507 23:10:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:59.507 23:10:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:59.507 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.507 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:59.507 23:10:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:07:59.507 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.507 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:59.507 23:10:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:59.507 23:10:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:07:59.507 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.507 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:59.507 23:10:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:07:59.507 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.507 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:59.507 23:10:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:59.507 23:10:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:07:59.507 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.507 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:59.507 23:10:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:07:59.507 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.507 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:59.507 23:10:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:59.507 23:10:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:07:59.507 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:59.507 23:10:48 -- common/autotest_common.sh@10 -- # set +x
00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
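Note the *WARNING* at the top of the dump: listener.transport is deprecated in favor of trtype (slated for removal in v24.05), which is why both keys appear in every listen_addresses entry. Given the JSON shape above, a compact way to flatten the output to one line per subsystem listener (a sketch, not part of discovery.sh):

  ./spdk/scripts/rpc.py nvmf_get_subsystems \
      | jq -r '.[] | .nqn as $nqn
               | .listen_addresses[]
               | "\($nqn) \(.trtype) \(.traddr):\(.trsvcid)"'
  # nqn.2014-08.org.nvmexpress.discovery TCP 10.0.0.2:4420
  # nqn.2016-06.io.spdk:cnode1 TCP 10.0.0.2:4420
  # ...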
00:07:59.507 23:10:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:59.507 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.507 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.507 23:10:48 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:59.507 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.507 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.507 23:10:48 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:59.507 23:10:48 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:59.507 23:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.507 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.507 23:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.507 23:10:48 -- target/discovery.sh@49 -- # check_bdevs= 00:07:59.507 23:10:48 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:59.507 23:10:48 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:59.507 23:10:48 -- target/discovery.sh@57 -- # nvmftestfini 00:07:59.507 23:10:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:59.507 23:10:48 -- nvmf/common.sh@117 -- # sync 00:07:59.507 23:10:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:59.507 23:10:48 -- nvmf/common.sh@120 -- # set +e 00:07:59.507 23:10:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.507 23:10:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:59.507 rmmod nvme_tcp 00:07:59.767 rmmod nvme_fabrics 00:07:59.767 rmmod nvme_keyring 00:07:59.767 23:10:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.767 23:10:48 -- nvmf/common.sh@124 -- # set -e 00:07:59.767 23:10:48 -- nvmf/common.sh@125 -- # return 0 00:07:59.767 23:10:48 -- nvmf/common.sh@478 -- # '[' -n 3763620 ']' 00:07:59.767 23:10:48 -- nvmf/common.sh@479 -- # killprocess 3763620 00:07:59.767 23:10:48 -- common/autotest_common.sh@936 -- # '[' -z 3763620 ']' 00:07:59.767 23:10:48 -- common/autotest_common.sh@940 -- # kill -0 3763620 00:07:59.767 23:10:48 -- common/autotest_common.sh@941 -- # uname 00:07:59.767 23:10:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.767 23:10:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3763620 00:07:59.767 23:10:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:59.767 23:10:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:59.767 23:10:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3763620' 00:07:59.767 killing process with pid 3763620 00:07:59.767 23:10:48 -- common/autotest_common.sh@955 -- # kill 3763620 00:07:59.767 [2024-04-26 23:10:48.861285] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:59.767 23:10:48 -- common/autotest_common.sh@960 -- # wait 3763620 00:07:59.767 23:10:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:59.767 23:10:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:59.767 23:10:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:59.767 23:10:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:59.767 23:10:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:59.767 23:10:48 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.767 23:10:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.767 23:10:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.387 23:10:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:02.387 00:08:02.387 real 0m10.892s 00:08:02.387 user 0m7.936s 00:08:02.387 sys 0m5.534s 00:08:02.387 23:10:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:02.387 23:10:51 -- common/autotest_common.sh@10 -- # set +x 00:08:02.387 ************************************ 00:08:02.387 END TEST nvmf_discovery 00:08:02.387 ************************************ 00:08:02.387 23:10:51 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:02.387 23:10:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.387 23:10:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.387 23:10:51 -- common/autotest_common.sh@10 -- # set +x 00:08:02.387 ************************************ 00:08:02.387 START TEST nvmf_referrals 00:08:02.387 ************************************ 00:08:02.387 23:10:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:02.387 * Looking for test storage... 00:08:02.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.387 23:10:51 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.387 23:10:51 -- nvmf/common.sh@7 -- # uname -s 00:08:02.387 23:10:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.387 23:10:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.387 23:10:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.387 23:10:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.387 23:10:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.387 23:10:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.387 23:10:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.387 23:10:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.387 23:10:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.387 23:10:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.387 23:10:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:02.387 23:10:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:02.387 23:10:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.387 23:10:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.387 23:10:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.387 23:10:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.387 23:10:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.387 23:10:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.387 23:10:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.388 23:10:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.388 23:10:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.388 23:10:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.388 23:10:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.388 23:10:51 -- paths/export.sh@5 -- # export PATH 00:08:02.388 23:10:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.388 23:10:51 -- nvmf/common.sh@47 -- # : 0 00:08:02.388 23:10:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.388 23:10:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.388 23:10:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.388 23:10:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.388 23:10:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.388 23:10:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.388 23:10:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.388 23:10:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.388 23:10:51 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:02.388 23:10:51 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:02.388 23:10:51 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:02.388 23:10:51 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:02.388 23:10:51 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:02.388 23:10:51 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:02.388 23:10:51 -- target/referrals.sh@37 -- # nvmftestinit 00:08:02.388 23:10:51 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:08:02.388 23:10:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.388 23:10:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:02.388 23:10:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:02.388 23:10:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:02.388 23:10:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.388 23:10:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.388 23:10:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.388 23:10:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:02.388 23:10:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:02.388 23:10:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.388 23:10:51 -- common/autotest_common.sh@10 -- # set +x 00:08:10.529 23:10:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:10.529 23:10:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.529 23:10:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.529 23:10:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.529 23:10:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.529 23:10:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.529 23:10:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.529 23:10:58 -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.529 23:10:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.529 23:10:58 -- nvmf/common.sh@296 -- # e810=() 00:08:10.529 23:10:58 -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.529 23:10:58 -- nvmf/common.sh@297 -- # x722=() 00:08:10.529 23:10:58 -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.530 23:10:58 -- nvmf/common.sh@298 -- # mlx=() 00:08:10.530 23:10:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.530 23:10:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.530 23:10:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.530 23:10:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.530 23:10:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.530 23:10:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.530 23:10:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:10.530 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:10.530 23:10:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.530 23:10:58 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.530 23:10:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:10.530 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:10.530 23:10:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.530 23:10:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.530 23:10:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.530 23:10:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:10.530 23:10:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.530 23:10:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:10.530 Found net devices under 0000:31:00.0: cvl_0_0 00:08:10.530 23:10:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.530 23:10:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.530 23:10:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.530 23:10:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:10.530 23:10:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.530 23:10:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:10.530 Found net devices under 0000:31:00.1: cvl_0_1 00:08:10.530 23:10:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.530 23:10:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:10.530 23:10:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:10.530 23:10:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:10.530 23:10:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.530 23:10:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.530 23:10:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.530 23:10:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.530 23:10:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.530 23:10:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.530 23:10:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.530 23:10:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.530 23:10:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.530 23:10:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.530 23:10:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.530 23:10:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.530 23:10:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:10.530 23:10:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.530 23:10:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.530 23:10:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.530 23:10:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.530 23:10:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.530 23:10:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.530 23:10:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:08:10.530 00:08:10.530 --- 10.0.0.2 ping statistics --- 00:08:10.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.530 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:08:10.530 23:10:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:08:10.530 00:08:10.530 --- 10.0.0.1 ping statistics --- 00:08:10.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.530 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:08:10.530 23:10:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.530 23:10:58 -- nvmf/common.sh@411 -- # return 0 00:08:10.530 23:10:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:10.530 23:10:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.530 23:10:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:10.530 23:10:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.530 23:10:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:10.530 23:10:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:10.530 23:10:58 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:10.530 23:10:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:10.530 23:10:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:10.530 23:10:58 -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 23:10:58 -- nvmf/common.sh@470 -- # nvmfpid=3768138 00:08:10.530 23:10:58 -- nvmf/common.sh@471 -- # waitforlisten 3768138 00:08:10.530 23:10:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.530 23:10:58 -- common/autotest_common.sh@817 -- # '[' -z 3768138 ']' 00:08:10.530 23:10:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.530 23:10:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:10.530 23:10:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.530 23:10:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:10.530 23:10:58 -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 [2024-04-26 23:10:58.750058] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
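The referrals.sh run that continues below exercises the discovery-referral RPCs: it adds a discovery listener on port 8009, registers 127.0.0.2/3/4 as referrals on port 4430, then checks that the RPC view and the on-the-wire view agree. Condensed to rpc.py form, with a reconstruction of the get_referral_ips comparison helper (the jq filters are verbatim from the trace; joining via command substitution is a simplification of the harness's sort/echo pipeline):

  rpc=./spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  get_referral_ips() {   # "rpc" asks the target, "nvme" asks the wire
      if [[ $1 == rpc ]]; then
          $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
      else
          nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
              | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
      fi
  }
  [[ $(get_referral_ips rpc) == $(get_referral_ips nvme) ]] && echo 'referral views match'

The trace also distinguishes referral flavors with a jq subtype filter (get_discovery_entries 'nvme subsystem' vs 'discovery subsystem referral'): a referral added with -n nqn.2016-06.io.spdk:cnode1 shows up as an nvme subsystem record, one added with -n discovery as a discovery subsystem referral.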
00:08:10.530 [2024-04-26 23:10:58.750126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.530 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.530 [2024-04-26 23:10:58.821847] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.530 [2024-04-26 23:10:58.857161] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.530 [2024-04-26 23:10:58.857205] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.530 [2024-04-26 23:10:58.857213] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.530 [2024-04-26 23:10:58.857220] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.530 [2024-04-26 23:10:58.857225] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.530 [2024-04-26 23:10:58.857343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.530 [2024-04-26 23:10:58.857359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.530 [2024-04-26 23:10:58.857498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.530 [2024-04-26 23:10:58.857499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.530 23:10:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:10.530 23:10:59 -- common/autotest_common.sh@850 -- # return 0 00:08:10.530 23:10:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:10.530 23:10:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:10.530 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 23:10:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.530 23:10:59 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.530 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.530 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 [2024-04-26 23:10:59.573631] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.530 23:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.530 23:10:59 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:10.530 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.530 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 [2024-04-26 23:10:59.589825] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:10.530 23:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.530 23:10:59 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:10.530 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.530 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 23:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.531 23:10:59 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:10.531 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.531 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.531 23:10:59 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:10.531 23:10:59 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:10.531 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.531 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.531 23:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.531 23:10:59 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.531 23:10:59 -- target/referrals.sh@48 -- # jq length 00:08:10.531 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.531 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.531 23:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.531 23:10:59 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:10.531 23:10:59 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:10.531 23:10:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:10.531 23:10:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.531 23:10:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:10.531 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.531 23:10:59 -- target/referrals.sh@21 -- # sort 00:08:10.531 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.531 23:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.531 23:10:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:10.531 23:10:59 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:10.531 23:10:59 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:10.531 23:10:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.531 23:10:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.531 23:10:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.531 23:10:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.531 23:10:59 -- target/referrals.sh@26 -- # sort 00:08:10.791 23:10:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:10.791 23:10:59 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:10.791 23:10:59 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:10.791 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.791 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.791 23:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.791 23:10:59 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:10.791 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.791 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.791 23:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.791 23:10:59 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:10.791 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.791 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.791 23:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.791 23:10:59 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:08:10.791 23:10:59 -- target/referrals.sh@56 -- # jq length 00:08:10.791 23:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.791 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:10.791 23:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.791 23:11:00 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:10.791 23:11:00 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:10.791 23:11:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.791 23:11:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.791 23:11:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.791 23:11:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.791 23:11:00 -- target/referrals.sh@26 -- # sort 00:08:11.064 23:11:00 -- target/referrals.sh@26 -- # echo 00:08:11.064 23:11:00 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:11.065 23:11:00 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:11.065 23:11:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.065 23:11:00 -- common/autotest_common.sh@10 -- # set +x 00:08:11.065 23:11:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.065 23:11:00 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:11.065 23:11:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.065 23:11:00 -- common/autotest_common.sh@10 -- # set +x 00:08:11.065 23:11:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.065 23:11:00 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:11.065 23:11:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.065 23:11:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.065 23:11:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.065 23:11:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.065 23:11:00 -- target/referrals.sh@21 -- # sort 00:08:11.065 23:11:00 -- common/autotest_common.sh@10 -- # set +x 00:08:11.065 23:11:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.065 23:11:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:11.065 23:11:00 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:11.065 23:11:00 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:11.065 23:11:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.065 23:11:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.065 23:11:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.065 23:11:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.065 23:11:00 -- target/referrals.sh@26 -- # sort 00:08:11.329 23:11:00 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:11.329 23:11:00 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:11.329 23:11:00 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:08:11.329 23:11:00 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:11.329 23:11:00 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:11.329 23:11:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.329 23:11:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:11.589 23:11:00 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:11.589 23:11:00 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:11.589 23:11:00 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:11.590 23:11:00 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:11.590 23:11:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.590 23:11:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:11.590 23:11:00 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:11.590 23:11:00 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:11.590 23:11:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.590 23:11:00 -- common/autotest_common.sh@10 -- # set +x 00:08:11.590 23:11:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.590 23:11:00 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:11.590 23:11:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.590 23:11:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.590 23:11:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.590 23:11:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.590 23:11:00 -- target/referrals.sh@21 -- # sort 00:08:11.590 23:11:00 -- common/autotest_common.sh@10 -- # set +x 00:08:11.590 23:11:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.850 23:11:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:11.850 23:11:00 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:11.850 23:11:00 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:11.851 23:11:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.851 23:11:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.851 23:11:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.851 23:11:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.851 23:11:00 -- target/referrals.sh@26 -- # sort 00:08:11.851 23:11:01 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:11.851 23:11:01 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:11.851 23:11:01 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:11.851 23:11:01 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:11.851 23:11:01 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:08:11.851 23:11:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.851 23:11:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:12.111 23:11:01 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:12.111 23:11:01 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:12.111 23:11:01 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:12.111 23:11:01 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:12.111 23:11:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.111 23:11:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:12.111 23:11:01 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:12.111 23:11:01 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:12.111 23:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:12.111 23:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:12.111 23:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:12.111 23:11:01 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.111 23:11:01 -- target/referrals.sh@82 -- # jq length 00:08:12.111 23:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:12.111 23:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:12.111 23:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:12.111 23:11:01 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:12.111 23:11:01 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:12.111 23:11:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.111 23:11:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.111 23:11:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.111 23:11:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.111 23:11:01 -- target/referrals.sh@26 -- # sort 00:08:12.371 23:11:01 -- target/referrals.sh@26 -- # echo 00:08:12.371 23:11:01 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:12.371 23:11:01 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:12.371 23:11:01 -- target/referrals.sh@86 -- # nvmftestfini 00:08:12.371 23:11:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:12.371 23:11:01 -- nvmf/common.sh@117 -- # sync 00:08:12.371 23:11:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.371 23:11:01 -- nvmf/common.sh@120 -- # set +e 00:08:12.371 23:11:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.371 23:11:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.371 rmmod nvme_tcp 00:08:12.371 rmmod nvme_fabrics 00:08:12.371 rmmod nvme_keyring 00:08:12.371 23:11:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.371 23:11:01 -- nvmf/common.sh@124 -- # set -e 
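The teardown traced here (nvmftestfini via the EXIT trap) unwinds what the init path set up: sync, unload the NVMe kernel modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above, retried because unload can race with teardown), then remove the namespace and flush the leftover address. A simplified reconstruction under those assumptions; this is a sketch, not the harness's exact function:

  nvmf_test_cleanup() {
      sync
      set +e
      for i in {1..20}; do                      # mirrors the {1..20} retry loop in the trace
          modprobe -v -r nvme-tcp && break      # dependent nvme-fabrics/nvme-keyring unload too
          sleep 1
      done
      modprobe -v -r nvme-fabrics
      set -e
      kill "$nvmfpid" 2>/dev/null               # target pid saved at startup
      ip netns delete cvl_0_0_ns_spdk 2>/dev/null
      ip -4 addr flush cvl_0_1 2>/dev/null
  }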
00:08:12.371 23:11:01 -- nvmf/common.sh@125 -- # return 0 00:08:12.371 23:11:01 -- nvmf/common.sh@478 -- # '[' -n 3768138 ']' 00:08:12.371 23:11:01 -- nvmf/common.sh@479 -- # killprocess 3768138 00:08:12.371 23:11:01 -- common/autotest_common.sh@936 -- # '[' -z 3768138 ']' 00:08:12.371 23:11:01 -- common/autotest_common.sh@940 -- # kill -0 3768138 00:08:12.371 23:11:01 -- common/autotest_common.sh@941 -- # uname 00:08:12.371 23:11:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:12.371 23:11:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3768138 00:08:12.371 23:11:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:12.371 23:11:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:12.371 23:11:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3768138' 00:08:12.371 killing process with pid 3768138 00:08:12.371 23:11:01 -- common/autotest_common.sh@955 -- # kill 3768138 00:08:12.371 23:11:01 -- common/autotest_common.sh@960 -- # wait 3768138 00:08:12.632 23:11:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:12.632 23:11:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:12.632 23:11:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:12.632 23:11:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.632 23:11:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.632 23:11:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.632 23:11:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.632 23:11:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.541 23:11:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:14.541 00:08:14.541 real 0m12.558s 00:08:14.541 user 0m14.305s 00:08:14.541 sys 0m6.103s 00:08:14.541 23:11:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:14.541 23:11:03 -- common/autotest_common.sh@10 -- # set +x 00:08:14.541 ************************************ 00:08:14.541 END TEST nvmf_referrals 00:08:14.541 ************************************ 00:08:14.541 23:11:03 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:14.541 23:11:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:14.541 23:11:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.541 23:11:03 -- common/autotest_common.sh@10 -- # set +x 00:08:14.802 ************************************ 00:08:14.802 START TEST nvmf_connect_disconnect 00:08:14.802 ************************************ 00:08:14.802 23:11:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:14.802 * Looking for test storage... 
00:08:14.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.802 23:11:04 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.802 23:11:04 -- nvmf/common.sh@7 -- # uname -s 00:08:14.802 23:11:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.802 23:11:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.802 23:11:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.802 23:11:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.802 23:11:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.802 23:11:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.802 23:11:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.802 23:11:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.802 23:11:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.802 23:11:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.802 23:11:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:14.802 23:11:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:14.802 23:11:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.802 23:11:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.802 23:11:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.802 23:11:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.802 23:11:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.802 23:11:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.802 23:11:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.802 23:11:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.802 23:11:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.802 23:11:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.802 23:11:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.802 23:11:04 -- paths/export.sh@5 -- # export PATH 00:08:14.803 23:11:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.803 23:11:04 -- nvmf/common.sh@47 -- # : 0 00:08:14.803 23:11:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.803 23:11:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.803 23:11:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.803 23:11:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.803 23:11:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.803 23:11:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.803 23:11:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.803 23:11:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.803 23:11:04 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.803 23:11:04 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.803 23:11:04 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:14.803 23:11:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:14.803 23:11:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.803 23:11:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:14.803 23:11:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:14.803 23:11:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:14.803 23:11:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.803 23:11:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.803 23:11:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.063 23:11:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:15.063 23:11:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:15.063 23:11:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.063 23:11:04 -- common/autotest_common.sh@10 -- # set +x 00:08:23.202 23:11:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:23.202 23:11:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.202 23:11:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.202 23:11:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.202 23:11:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.202 23:11:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.202 23:11:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.202 23:11:11 -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.202 23:11:11 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:23.202 23:11:11 -- nvmf/common.sh@296 -- # e810=() 00:08:23.202 23:11:11 -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.202 23:11:11 -- nvmf/common.sh@297 -- # x722=() 00:08:23.202 23:11:11 -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.202 23:11:11 -- nvmf/common.sh@298 -- # mlx=() 00:08:23.202 23:11:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.202 23:11:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.202 23:11:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.202 23:11:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.202 23:11:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.202 23:11:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.202 23:11:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:23.202 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:23.202 23:11:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.202 23:11:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:23.202 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:23.202 23:11:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.202 23:11:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.202 23:11:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.202 23:11:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.202 23:11:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:23.202 23:11:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.202 23:11:11 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:08:23.202 Found net devices under 0000:31:00.0: cvl_0_0 00:08:23.202 23:11:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.202 23:11:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.202 23:11:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.202 23:11:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:23.202 23:11:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.202 23:11:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:23.202 Found net devices under 0000:31:00.1: cvl_0_1 00:08:23.203 23:11:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.203 23:11:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:23.203 23:11:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:23.203 23:11:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:23.203 23:11:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:23.203 23:11:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:23.203 23:11:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.203 23:11:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.203 23:11:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.203 23:11:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.203 23:11:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.203 23:11:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.203 23:11:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.203 23:11:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.203 23:11:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.203 23:11:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.203 23:11:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.203 23:11:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.203 23:11:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.203 23:11:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.203 23:11:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.203 23:11:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.203 23:11:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.203 23:11:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.203 23:11:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.203 23:11:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.801 ms 00:08:23.203 00:08:23.203 --- 10.0.0.2 ping statistics --- 00:08:23.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.203 rtt min/avg/max/mdev = 0.801/0.801/0.801/0.000 ms 00:08:23.203 23:11:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:08:23.203 00:08:23.203 --- 10.0.0.1 ping statistics --- 00:08:23.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.203 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:08:23.203 23:11:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.203 23:11:11 -- nvmf/common.sh@411 -- # return 0 00:08:23.203 23:11:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:23.203 23:11:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.203 23:11:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:23.203 23:11:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:23.203 23:11:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.203 23:11:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:23.203 23:11:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:23.203 23:11:11 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:23.203 23:11:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:23.203 23:11:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:23.203 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:08:23.203 23:11:11 -- nvmf/common.sh@470 -- # nvmfpid=3773215 00:08:23.203 23:11:11 -- nvmf/common.sh@471 -- # waitforlisten 3773215 00:08:23.203 23:11:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.203 23:11:11 -- common/autotest_common.sh@817 -- # '[' -z 3773215 ']' 00:08:23.203 23:11:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.203 23:11:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:23.203 23:11:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.203 23:11:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:23.203 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:08:23.203 [2024-04-26 23:11:11.459826] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:08:23.203 [2024-04-26 23:11:11.459899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.203 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.203 [2024-04-26 23:11:11.526512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.203 [2024-04-26 23:11:11.556368] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.203 [2024-04-26 23:11:11.556408] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.203 [2024-04-26 23:11:11.556417] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.203 [2024-04-26 23:11:11.556425] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.203 [2024-04-26 23:11:11.556432] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
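The bring-up earlier in this trace (nvmf/common.sh@229-268) builds the two-port TCP test topology: one ice port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the cross-namespace pings confirm the link. A minimal standalone sketch of that topology, reusing the interface names and addresses from this log (adjust for other hardware):

# Sketch only: reproduces the namespace topology the harness sets up above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside namespace)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator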
00:08:23.203 [2024-04-26 23:11:11.556575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.203 [2024-04-26 23:11:11.556695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.203 [2024-04-26 23:11:11.556873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.203 [2024-04-26 23:11:11.556873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.203 23:11:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:23.203 23:11:12 -- common/autotest_common.sh@850 -- # return 0 00:08:23.203 23:11:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:23.203 23:11:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:23.203 23:11:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.203 23:11:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.203 23:11:12 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:23.203 23:11:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.203 23:11:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.203 [2024-04-26 23:11:12.275550] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.203 23:11:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.203 23:11:12 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:23.203 23:11:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.203 23:11:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.203 23:11:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.203 23:11:12 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:23.203 23:11:12 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.203 23:11:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.203 23:11:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.203 23:11:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.203 23:11:12 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.203 23:11:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.203 23:11:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.203 23:11:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.203 23:11:12 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.203 23:11:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.203 23:11:12 -- common/autotest_common.sh@10 -- # set +x 00:08:23.203 [2024-04-26 23:11:12.334978] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.203 23:11:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.203 23:11:12 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:23.203 23:11:12 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:23.203 23:11:12 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:23.203 23:11:12 -- target/connect_disconnect.sh@34 -- # set +x 00:08:25.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:35.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.076 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:29.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.919 23:15:05 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
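The connect_disconnect run above first provisions the target over RPC (connect_disconnect.sh@18-24: a TCP transport, a 64 MiB/512 B Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420), then performs num_iterations=100 connect/disconnect cycles — each "disconnected 1 controller(s)" line is one iteration. A sketch of the same sequence, assuming rpc.py points at the nvmf_tgt RPC socket; the loop body is what each iteration amounts to, not the script's literal code:

# Target bring-up, taken verbatim from the RPCs in the trace above.
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                      # creates Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 100 connect/disconnect cycles; -i 8 (eight I/O queues) mirrors
# NVME_CONNECT='nvme connect -i 8' set at connect_disconnect.sh@29.
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done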
00:12:16.919 23:15:05 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:16.919 23:15:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:16.919 23:15:05 -- nvmf/common.sh@117 -- # sync 00:12:16.919 23:15:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.919 23:15:05 -- nvmf/common.sh@120 -- # set +e 00:12:16.919 23:15:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.919 23:15:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.919 rmmod nvme_tcp 00:12:16.919 rmmod nvme_fabrics 00:12:16.919 rmmod nvme_keyring 00:12:16.919 23:15:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.919 23:15:06 -- nvmf/common.sh@124 -- # set -e 00:12:16.919 23:15:06 -- nvmf/common.sh@125 -- # return 0 00:12:16.919 23:15:06 -- nvmf/common.sh@478 -- # '[' -n 3773215 ']' 00:12:16.919 23:15:06 -- nvmf/common.sh@479 -- # killprocess 3773215 00:12:16.919 23:15:06 -- common/autotest_common.sh@936 -- # '[' -z 3773215 ']' 00:12:16.919 23:15:06 -- common/autotest_common.sh@940 -- # kill -0 3773215 00:12:16.919 23:15:06 -- common/autotest_common.sh@941 -- # uname 00:12:16.919 23:15:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:16.919 23:15:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3773215 00:12:16.919 23:15:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:16.919 23:15:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:16.919 23:15:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3773215' 00:12:16.919 killing process with pid 3773215 00:12:16.919 23:15:06 -- common/autotest_common.sh@955 -- # kill 3773215 00:12:16.919 23:15:06 -- common/autotest_common.sh@960 -- # wait 3773215 00:12:17.180 23:15:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:17.180 23:15:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:17.180 23:15:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:17.180 23:15:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.180 23:15:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.180 23:15:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.180 23:15:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.180 23:15:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.093 23:15:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.093 00:12:19.093 real 4m4.398s 00:12:19.093 user 15m32.113s 00:12:19.093 sys 0m22.778s 00:12:19.093 23:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.093 23:15:08 -- common/autotest_common.sh@10 -- # set +x 00:12:19.093 ************************************ 00:12:19.093 END TEST nvmf_connect_disconnect 00:12:19.093 ************************************ 00:12:19.355 23:15:08 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:19.355 23:15:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:19.355 23:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.355 23:15:08 -- common/autotest_common.sh@10 -- # set +x 00:12:19.355 ************************************ 00:12:19.355 START TEST nvmf_multitarget 00:12:19.355 ************************************ 00:12:19.355 23:15:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:19.355 * Looking for test storage... 
00:12:19.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.618 23:15:08 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.618 23:15:08 -- nvmf/common.sh@7 -- # uname -s 00:12:19.618 23:15:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.618 23:15:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.618 23:15:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.618 23:15:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.618 23:15:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.618 23:15:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.618 23:15:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.618 23:15:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.618 23:15:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.618 23:15:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.618 23:15:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:19.618 23:15:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:19.618 23:15:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.618 23:15:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.618 23:15:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.618 23:15:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.618 23:15:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.618 23:15:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.618 23:15:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.618 23:15:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.618 23:15:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.618 23:15:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.618 23:15:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.618 23:15:08 -- paths/export.sh@5 -- # export PATH 00:12:19.618 23:15:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.618 23:15:08 -- nvmf/common.sh@47 -- # : 0 00:12:19.618 23:15:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.618 23:15:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.618 23:15:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.618 23:15:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.618 23:15:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.618 23:15:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.618 23:15:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.618 23:15:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.618 23:15:08 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:19.618 23:15:08 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:19.618 23:15:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:19.618 23:15:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.618 23:15:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:19.618 23:15:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:19.618 23:15:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:19.618 23:15:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.618 23:15:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.618 23:15:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.618 23:15:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:19.618 23:15:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:19.618 23:15:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.618 23:15:08 -- common/autotest_common.sh@10 -- # set +x 00:12:27.770 23:15:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:27.770 23:15:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.770 23:15:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.770 23:15:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.770 23:15:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.770 23:15:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.770 23:15:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.770 23:15:15 -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.770 23:15:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.770 23:15:15 -- 
nvmf/common.sh@296 -- # e810=() 00:12:27.770 23:15:15 -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.770 23:15:15 -- nvmf/common.sh@297 -- # x722=() 00:12:27.770 23:15:15 -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.770 23:15:15 -- nvmf/common.sh@298 -- # mlx=() 00:12:27.770 23:15:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.770 23:15:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.770 23:15:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.770 23:15:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.770 23:15:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.770 23:15:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.770 23:15:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:27.770 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:27.770 23:15:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.770 23:15:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:27.770 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:27.770 23:15:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.770 23:15:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.770 23:15:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.770 23:15:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:27.770 23:15:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.770 23:15:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:12:27.770 Found net devices under 0000:31:00.0: cvl_0_0 00:12:27.770 23:15:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.770 23:15:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.770 23:15:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.770 23:15:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:27.770 23:15:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.770 23:15:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:27.770 Found net devices under 0000:31:00.1: cvl_0_1 00:12:27.770 23:15:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.770 23:15:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:27.770 23:15:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:27.770 23:15:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:27.770 23:15:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.770 23:15:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.770 23:15:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.770 23:15:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:27.770 23:15:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.770 23:15:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.770 23:15:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:27.770 23:15:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.770 23:15:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.770 23:15:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:27.770 23:15:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:27.770 23:15:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.770 23:15:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.770 23:15:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.770 23:15:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.770 23:15:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:27.770 23:15:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.770 23:15:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.770 23:15:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.770 23:15:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:27.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.754 ms 00:12:27.770 00:12:27.770 --- 10.0.0.2 ping statistics --- 00:12:27.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.770 rtt min/avg/max/mdev = 0.754/0.754/0.754/0.000 ms 00:12:27.770 23:15:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:12:27.770 00:12:27.770 --- 10.0.0.1 ping statistics --- 00:12:27.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.770 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:12:27.770 23:15:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.770 23:15:15 -- nvmf/common.sh@411 -- # return 0 00:12:27.770 23:15:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:27.770 23:15:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.770 23:15:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:27.770 23:15:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.770 23:15:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:27.770 23:15:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:27.770 23:15:15 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:27.770 23:15:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:27.770 23:15:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:27.770 23:15:15 -- common/autotest_common.sh@10 -- # set +x 00:12:27.770 23:15:15 -- nvmf/common.sh@470 -- # nvmfpid=3825410 00:12:27.770 23:15:15 -- nvmf/common.sh@471 -- # waitforlisten 3825410 00:12:27.770 23:15:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.770 23:15:15 -- common/autotest_common.sh@817 -- # '[' -z 3825410 ']' 00:12:27.770 23:15:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.770 23:15:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:27.770 23:15:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.770 23:15:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:27.770 23:15:15 -- common/autotest_common.sh@10 -- # set +x 00:12:27.770 [2024-04-26 23:15:15.919647] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:27.771 [2024-04-26 23:15:15.919712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.771 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.771 [2024-04-26 23:15:15.993382] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.771 [2024-04-26 23:15:16.031857] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.771 [2024-04-26 23:15:16.031908] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.771 [2024-04-26 23:15:16.031915] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.771 [2024-04-26 23:15:16.031922] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.771 [2024-04-26 23:15:16.031928] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
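The nvmfappstart step above launches nvmf_tgt inside the target namespace and blocks until its UNIX-domain RPC socket answers (the "Waiting for process to start up and listen on /var/tmp/spdk.sock..." line). A sketch of what that amounts to — the polling loop is an approximation of the harness's waitforlisten helper, not its exact code:

# Start the target app in the namespace: shm id 0, all tracepoints, cores 0-3.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the RPC socket accepts requests (waitforlisten equivalent).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done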
00:12:27.771 [2024-04-26 23:15:16.032085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.771 [2024-04-26 23:15:16.032233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.771 [2024-04-26 23:15:16.032394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.771 [2024-04-26 23:15:16.032395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.771 23:15:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:27.771 23:15:16 -- common/autotest_common.sh@850 -- # return 0 00:12:27.771 23:15:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:27.771 23:15:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:27.771 23:15:16 -- common/autotest_common.sh@10 -- # set +x 00:12:27.771 23:15:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.771 23:15:16 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:27.771 23:15:16 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:27.771 23:15:16 -- target/multitarget.sh@21 -- # jq length 00:12:27.771 23:15:16 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:27.771 23:15:16 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:27.771 "nvmf_tgt_1" 00:12:27.771 23:15:16 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:27.771 "nvmf_tgt_2" 00:12:28.032 23:15:17 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:28.032 23:15:17 -- target/multitarget.sh@28 -- # jq length 00:12:28.032 23:15:17 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:28.032 23:15:17 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:28.032 true 00:12:28.032 23:15:17 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:28.292 true 00:12:28.292 23:15:17 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:28.292 23:15:17 -- target/multitarget.sh@35 -- # jq length 00:12:28.292 23:15:17 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:28.292 23:15:17 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:28.292 23:15:17 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:28.292 23:15:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:28.292 23:15:17 -- nvmf/common.sh@117 -- # sync 00:12:28.292 23:15:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.292 23:15:17 -- nvmf/common.sh@120 -- # set +e 00:12:28.292 23:15:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.292 23:15:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.292 rmmod nvme_tcp 00:12:28.292 rmmod nvme_fabrics 00:12:28.292 rmmod nvme_keyring 00:12:28.292 23:15:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.292 23:15:17 -- nvmf/common.sh@124 -- # set -e 00:12:28.292 23:15:17 -- nvmf/common.sh@125 -- # return 0 
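The multitarget sequence above exercises the per-target lifecycle RPCs through the test wrapper multitarget_rpc.py: the default target exists at start (jq length == 1), nvmf_tgt_1 and nvmf_tgt_2 are created and counted (== 3), then deleted (== 1 again). Condensed as a sketch using exactly the calls from the trace; -s 32 mirrors the trace (a per-target limit setting — treat its precise meaning as an assumption here):

rpc=./test/nvmf/target/multitarget_rpc.py
test "$($rpc nvmf_get_targets | jq length)" -eq 1    # only the default target
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
test "$($rpc nvmf_get_targets | jq length)" -eq 3    # default + two new targets
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
test "$($rpc nvmf_get_targets | jq length)" -eq 1    # back to the default target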
00:12:28.292 23:15:17 -- nvmf/common.sh@478 -- # '[' -n 3825410 ']' 00:12:28.292 23:15:17 -- nvmf/common.sh@479 -- # killprocess 3825410 00:12:28.292 23:15:17 -- common/autotest_common.sh@936 -- # '[' -z 3825410 ']' 00:12:28.292 23:15:17 -- common/autotest_common.sh@940 -- # kill -0 3825410 00:12:28.292 23:15:17 -- common/autotest_common.sh@941 -- # uname 00:12:28.292 23:15:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:28.292 23:15:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3825410 00:12:28.552 23:15:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:28.552 23:15:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:28.553 23:15:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3825410' 00:12:28.553 killing process with pid 3825410 00:12:28.553 23:15:17 -- common/autotest_common.sh@955 -- # kill 3825410 00:12:28.553 23:15:17 -- common/autotest_common.sh@960 -- # wait 3825410 00:12:28.553 23:15:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:28.553 23:15:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:28.553 23:15:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:28.553 23:15:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.553 23:15:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.553 23:15:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.553 23:15:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.553 23:15:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.096 23:15:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.096 00:12:31.096 real 0m11.246s 00:12:31.096 user 0m9.337s 00:12:31.096 sys 0m5.798s 00:12:31.096 23:15:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.096 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:12:31.096 ************************************ 00:12:31.096 END TEST nvmf_multitarget 00:12:31.096 ************************************ 00:12:31.096 23:15:19 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:31.096 23:15:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.096 23:15:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.096 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:12:31.096 ************************************ 00:12:31.096 START TEST nvmf_rpc 00:12:31.096 ************************************ 00:12:31.096 23:15:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:31.096 * Looking for test storage... 
00:12:31.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.096 23:15:20 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.096 23:15:20 -- nvmf/common.sh@7 -- # uname -s 00:12:31.096 23:15:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.096 23:15:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.096 23:15:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.096 23:15:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.096 23:15:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.096 23:15:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.096 23:15:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.096 23:15:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.096 23:15:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.096 23:15:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.096 23:15:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:31.096 23:15:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:31.096 23:15:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.096 23:15:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.096 23:15:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.096 23:15:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.096 23:15:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.096 23:15:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.096 23:15:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.096 23:15:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.096 23:15:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.096 23:15:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.096 23:15:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.096 23:15:20 -- paths/export.sh@5 -- # export PATH 00:12:31.096 23:15:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.096 23:15:20 -- nvmf/common.sh@47 -- # : 0 00:12:31.096 23:15:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.096 23:15:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.096 23:15:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.096 23:15:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.096 23:15:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.096 23:15:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.096 23:15:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.096 23:15:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.096 23:15:20 -- target/rpc.sh@11 -- # loops=5 00:12:31.096 23:15:20 -- target/rpc.sh@23 -- # nvmftestinit 00:12:31.096 23:15:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:31.096 23:15:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.096 23:15:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:31.096 23:15:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:31.096 23:15:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:31.096 23:15:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.096 23:15:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.096 23:15:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.096 23:15:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:31.096 23:15:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:31.096 23:15:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.096 23:15:20 -- common/autotest_common.sh@10 -- # set +x 00:12:37.750 23:15:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:37.750 23:15:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.750 23:15:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.750 23:15:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.750 23:15:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.750 23:15:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.750 23:15:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.750 23:15:26 -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.750 23:15:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.750 23:15:26 -- nvmf/common.sh@296 -- # e810=() 00:12:37.750 23:15:26 -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.750 
23:15:26 -- nvmf/common.sh@297 -- # x722=() 00:12:37.750 23:15:26 -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.750 23:15:26 -- nvmf/common.sh@298 -- # mlx=() 00:12:37.750 23:15:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.750 23:15:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.750 23:15:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.750 23:15:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:37.750 23:15:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.750 23:15:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.750 23:15:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:37.750 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:37.750 23:15:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.750 23:15:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:37.750 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:37.750 23:15:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.750 23:15:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.750 23:15:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.750 23:15:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:37.750 23:15:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.750 23:15:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:37.750 Found net devices under 0000:31:00.0: cvl_0_0 00:12:37.750 23:15:26 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:37.750 23:15:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.750 23:15:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.750 23:15:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:37.750 23:15:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.750 23:15:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:37.750 Found net devices under 0000:31:00.1: cvl_0_1 00:12:37.750 23:15:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.750 23:15:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:37.750 23:15:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:37.750 23:15:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:37.750 23:15:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:37.750 23:15:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.750 23:15:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.750 23:15:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.750 23:15:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:37.750 23:15:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.750 23:15:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.750 23:15:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:37.750 23:15:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.750 23:15:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.750 23:15:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:37.750 23:15:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:37.750 23:15:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.750 23:15:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.011 23:15:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.011 23:15:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.011 23:15:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.011 23:15:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.011 23:15:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.011 23:15:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.011 23:15:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:12:38.011 00:12:38.011 --- 10.0.0.2 ping statistics --- 00:12:38.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.011 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:12:38.011 23:15:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.487 ms 00:12:38.272 00:12:38.272 --- 10.0.0.1 ping statistics --- 00:12:38.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.272 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:12:38.272 23:15:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.272 23:15:27 -- nvmf/common.sh@411 -- # return 0 00:12:38.272 23:15:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:38.272 23:15:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.272 23:15:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:38.272 23:15:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:38.272 23:15:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.272 23:15:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:38.272 23:15:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:38.272 23:15:27 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:38.272 23:15:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:38.272 23:15:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:38.272 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:12:38.272 23:15:27 -- nvmf/common.sh@470 -- # nvmfpid=3829989 00:12:38.272 23:15:27 -- nvmf/common.sh@471 -- # waitforlisten 3829989 00:12:38.272 23:15:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.272 23:15:27 -- common/autotest_common.sh@817 -- # '[' -z 3829989 ']' 00:12:38.272 23:15:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.272 23:15:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:38.272 23:15:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.272 23:15:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:38.272 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:12:38.272 [2024-04-26 23:15:27.374214] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:38.272 [2024-04-26 23:15:27.374279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.272 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.272 [2024-04-26 23:15:27.446527] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.272 [2024-04-26 23:15:27.484239] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.272 [2024-04-26 23:15:27.484282] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.272 [2024-04-26 23:15:27.484291] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.272 [2024-04-26 23:15:27.484299] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.272 [2024-04-26 23:15:27.484305] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
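Condensed, the nvmf_tcp_init plumbing traced above moves one of the two E810 ports into a private network namespace so that initiator and target traffic crosses a real link, then verifies reachability in both directions (interface names cvl_0_0/cvl_0_1 as renamed earlier in the log):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

With both pings answering, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the listener at 10.0.0.2:4420 is only reachable through cvl_0_1.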
00:12:38.272 [2024-04-26 23:15:27.484433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.272 [2024-04-26 23:15:27.484574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.272 [2024-04-26 23:15:27.484733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.272 [2024-04-26 23:15:27.484735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.214 23:15:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:39.214 23:15:28 -- common/autotest_common.sh@850 -- # return 0 00:12:39.214 23:15:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:39.214 23:15:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:39.214 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.214 23:15:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.214 23:15:28 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:39.214 23:15:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.214 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.214 23:15:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.214 23:15:28 -- target/rpc.sh@26 -- # stats='{ 00:12:39.214 "tick_rate": 2400000000, 00:12:39.214 "poll_groups": [ 00:12:39.214 { 00:12:39.214 "name": "nvmf_tgt_poll_group_0", 00:12:39.214 "admin_qpairs": 0, 00:12:39.214 "io_qpairs": 0, 00:12:39.214 "current_admin_qpairs": 0, 00:12:39.214 "current_io_qpairs": 0, 00:12:39.214 "pending_bdev_io": 0, 00:12:39.214 "completed_nvme_io": 0, 00:12:39.214 "transports": [] 00:12:39.214 }, 00:12:39.214 { 00:12:39.214 "name": "nvmf_tgt_poll_group_1", 00:12:39.214 "admin_qpairs": 0, 00:12:39.214 "io_qpairs": 0, 00:12:39.214 "current_admin_qpairs": 0, 00:12:39.214 "current_io_qpairs": 0, 00:12:39.214 "pending_bdev_io": 0, 00:12:39.214 "completed_nvme_io": 0, 00:12:39.214 "transports": [] 00:12:39.214 }, 00:12:39.214 { 00:12:39.214 "name": "nvmf_tgt_poll_group_2", 00:12:39.214 "admin_qpairs": 0, 00:12:39.214 "io_qpairs": 0, 00:12:39.214 "current_admin_qpairs": 0, 00:12:39.214 "current_io_qpairs": 0, 00:12:39.214 "pending_bdev_io": 0, 00:12:39.214 "completed_nvme_io": 0, 00:12:39.214 "transports": [] 00:12:39.214 }, 00:12:39.214 { 00:12:39.214 "name": "nvmf_tgt_poll_group_3", 00:12:39.214 "admin_qpairs": 0, 00:12:39.214 "io_qpairs": 0, 00:12:39.214 "current_admin_qpairs": 0, 00:12:39.214 "current_io_qpairs": 0, 00:12:39.214 "pending_bdev_io": 0, 00:12:39.214 "completed_nvme_io": 0, 00:12:39.214 "transports": [] 00:12:39.214 } 00:12:39.214 ] 00:12:39.214 }' 00:12:39.214 23:15:28 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:39.214 23:15:28 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:39.214 23:15:28 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:39.214 23:15:28 -- target/rpc.sh@15 -- # wc -l 00:12:39.214 23:15:28 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:39.214 23:15:28 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:39.214 23:15:28 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:39.214 23:15:28 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.214 23:15:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.214 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.214 [2024-04-26 23:15:28.300926] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.214 23:15:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.214 23:15:28 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:39.214 23:15:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.214 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.214 23:15:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.214 23:15:28 -- target/rpc.sh@33 -- # stats='{ 00:12:39.214 "tick_rate": 2400000000, 00:12:39.214 "poll_groups": [ 00:12:39.214 { 00:12:39.214 "name": "nvmf_tgt_poll_group_0", 00:12:39.214 "admin_qpairs": 0, 00:12:39.214 "io_qpairs": 0, 00:12:39.214 "current_admin_qpairs": 0, 00:12:39.214 "current_io_qpairs": 0, 00:12:39.214 "pending_bdev_io": 0, 00:12:39.214 "completed_nvme_io": 0, 00:12:39.214 "transports": [ 00:12:39.214 { 00:12:39.214 "trtype": "TCP" 00:12:39.214 } 00:12:39.214 ] 00:12:39.214 }, 00:12:39.214 { 00:12:39.214 "name": "nvmf_tgt_poll_group_1", 00:12:39.214 "admin_qpairs": 0, 00:12:39.214 "io_qpairs": 0, 00:12:39.214 "current_admin_qpairs": 0, 00:12:39.214 "current_io_qpairs": 0, 00:12:39.214 "pending_bdev_io": 0, 00:12:39.214 "completed_nvme_io": 0, 00:12:39.214 "transports": [ 00:12:39.214 { 00:12:39.214 "trtype": "TCP" 00:12:39.214 } 00:12:39.214 ] 00:12:39.214 }, 00:12:39.214 { 00:12:39.214 "name": "nvmf_tgt_poll_group_2", 00:12:39.214 "admin_qpairs": 0, 00:12:39.214 "io_qpairs": 0, 00:12:39.214 "current_admin_qpairs": 0, 00:12:39.214 "current_io_qpairs": 0, 00:12:39.214 "pending_bdev_io": 0, 00:12:39.214 "completed_nvme_io": 0, 00:12:39.214 "transports": [ 00:12:39.214 { 00:12:39.214 "trtype": "TCP" 00:12:39.214 } 00:12:39.214 ] 00:12:39.214 }, 00:12:39.214 { 00:12:39.214 "name": "nvmf_tgt_poll_group_3", 00:12:39.214 "admin_qpairs": 0, 00:12:39.214 "io_qpairs": 0, 00:12:39.214 "current_admin_qpairs": 0, 00:12:39.214 "current_io_qpairs": 0, 00:12:39.214 "pending_bdev_io": 0, 00:12:39.214 "completed_nvme_io": 0, 00:12:39.214 "transports": [ 00:12:39.214 { 00:12:39.214 "trtype": "TCP" 00:12:39.214 } 00:12:39.214 ] 00:12:39.214 } 00:12:39.214 ] 00:12:39.214 }' 00:12:39.214 23:15:28 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:39.214 23:15:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:39.214 23:15:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:39.214 23:15:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:39.214 23:15:28 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:39.214 23:15:28 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:39.214 23:15:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:39.214 23:15:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:39.215 23:15:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:39.215 23:15:28 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:39.215 23:15:28 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:39.215 23:15:28 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:39.215 23:15:28 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:39.215 23:15:28 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:39.215 23:15:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.215 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.215 Malloc1 00:12:39.215 23:15:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.215 23:15:28 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:39.215 23:15:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.215 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.215 
23:15:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.215 23:15:28 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:39.215 23:15:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.215 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.475 23:15:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.475 23:15:28 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:39.475 23:15:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.475 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.475 23:15:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.475 23:15:28 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.475 23:15:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.475 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.475 [2024-04-26 23:15:28.488703] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.475 23:15:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.475 23:15:28 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:39.475 23:15:28 -- common/autotest_common.sh@638 -- # local es=0 00:12:39.475 23:15:28 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:39.475 23:15:28 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:39.475 23:15:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:39.475 23:15:28 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:39.475 23:15:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:39.475 23:15:28 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:39.475 23:15:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:39.475 23:15:28 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:39.475 23:15:28 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:39.475 23:15:28 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:39.475 [2024-04-26 23:15:28.515445] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:39.475 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:39.475 could not add new controller: failed to write to nvme-fabrics device 00:12:39.475 23:15:28 -- common/autotest_common.sh@641 -- # es=1 00:12:39.475 23:15:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:39.475 23:15:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:39.475 23:15:28 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:12:39.475 23:15:28 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:39.475 23:15:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.475 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:12:39.475 23:15:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.475 23:15:28 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.387 23:15:30 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.387 23:15:30 -- common/autotest_common.sh@1184 -- # local i=0 00:12:41.387 23:15:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.387 23:15:30 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:41.387 23:15:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:43.298 23:15:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:43.298 23:15:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:43.298 23:15:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.298 23:15:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:43.298 23:15:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.298 23:15:32 -- common/autotest_common.sh@1194 -- # return 0 00:12:43.298 23:15:32 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.298 23:15:32 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.298 23:15:32 -- common/autotest_common.sh@1205 -- # local i=0 00:12:43.298 23:15:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:43.298 23:15:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.298 23:15:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:43.298 23:15:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.298 23:15:32 -- common/autotest_common.sh@1217 -- # return 0 00:12:43.298 23:15:32 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:43.298 23:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.298 23:15:32 -- common/autotest_common.sh@10 -- # set +x 00:12:43.298 23:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.298 23:15:32 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.299 23:15:32 -- common/autotest_common.sh@638 -- # local es=0 00:12:43.299 23:15:32 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.299 23:15:32 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:43.299 23:15:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:43.299 23:15:32 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:43.299 23:15:32 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:43.299 23:15:32 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:43.299 23:15:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:43.299 23:15:32 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:43.299 23:15:32 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:43.299 23:15:32 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.299 [2024-04-26 23:15:32.289507] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:43.299 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:43.299 could not add new controller: failed to write to nvme-fabrics device 00:12:43.299 23:15:32 -- common/autotest_common.sh@641 -- # es=1 00:12:43.299 23:15:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:43.299 23:15:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:43.299 23:15:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:43.299 23:15:32 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:43.299 23:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.299 23:15:32 -- common/autotest_common.sh@10 -- # set +x 00:12:43.299 23:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.299 23:15:32 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.681 23:15:33 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.681 23:15:33 -- common/autotest_common.sh@1184 -- # local i=0 00:12:44.681 23:15:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.681 23:15:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:44.681 23:15:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:47.225 23:15:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:47.225 23:15:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:47.225 23:15:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.225 23:15:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:47.225 23:15:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.225 23:15:35 -- common/autotest_common.sh@1194 -- # return 0 00:12:47.225 23:15:35 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.225 23:15:35 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.225 23:15:35 -- common/autotest_common.sh@1205 -- # local i=0 00:12:47.225 23:15:35 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:47.225 23:15:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.225 23:15:35 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:47.225 23:15:35 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.225 23:15:36 -- common/autotest_common.sh@1217 -- # return 0 00:12:47.225 23:15:36 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.225 23:15:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.225 23:15:36 -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 23:15:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.225 23:15:36 -- target/rpc.sh@81 -- # seq 1 5 00:12:47.225 23:15:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.225 23:15:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.225 23:15:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.225 23:15:36 -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 23:15:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.225 23:15:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.225 23:15:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.225 23:15:36 -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 [2024-04-26 23:15:36.043238] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.225 23:15:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.225 23:15:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.225 23:15:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.225 23:15:36 -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 23:15:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.225 23:15:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.225 23:15:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.225 23:15:36 -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 23:15:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.225 23:15:36 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.607 23:15:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.607 23:15:37 -- common/autotest_common.sh@1184 -- # local i=0 00:12:48.607 23:15:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.607 23:15:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:48.607 23:15:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:50.519 23:15:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:50.519 23:15:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:50.519 23:15:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.519 23:15:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:50.519 23:15:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.520 23:15:39 -- common/autotest_common.sh@1194 -- # return 0 00:12:50.520 23:15:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.520 23:15:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.520 23:15:39 -- common/autotest_common.sh@1205 -- # local i=0 00:12:50.520 23:15:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:50.520 23:15:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
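Each of the five loop iterations above and below runs the same create/attach/connect/teardown cycle; one iteration reduced to its RPCs and host-side commands, with rpc_cmd standing for the autotest wrapper around the target's JSON-RPC socket and the --hostnqn/--hostid flags elided:

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5      # nsid 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420      # host side
    # waitforserial: poll lsblk until a device with serial SPDKISFASTANDAWESOME appears
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1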
00:12:50.520 23:15:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:50.520 23:15:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.520 23:15:39 -- common/autotest_common.sh@1217 -- # return 0 00:12:50.520 23:15:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.520 23:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.520 23:15:39 -- common/autotest_common.sh@10 -- # set +x 00:12:50.520 23:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.520 23:15:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.520 23:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.520 23:15:39 -- common/autotest_common.sh@10 -- # set +x 00:12:50.520 23:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.520 23:15:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.520 23:15:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.520 23:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.520 23:15:39 -- common/autotest_common.sh@10 -- # set +x 00:12:50.520 23:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.520 23:15:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.520 23:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.520 23:15:39 -- common/autotest_common.sh@10 -- # set +x 00:12:50.520 [2024-04-26 23:15:39.765325] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.520 23:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.520 23:15:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.520 23:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.520 23:15:39 -- common/autotest_common.sh@10 -- # set +x 00:12:50.779 23:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.779 23:15:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.779 23:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.779 23:15:39 -- common/autotest_common.sh@10 -- # set +x 00:12:50.779 23:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.779 23:15:39 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.162 23:15:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.162 23:15:41 -- common/autotest_common.sh@1184 -- # local i=0 00:12:52.162 23:15:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.162 23:15:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:52.162 23:15:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:54.710 23:15:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:54.710 23:15:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:54.710 23:15:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.710 23:15:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:54.710 23:15:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.710 23:15:43 -- 
common/autotest_common.sh@1194 -- # return 0 00:12:54.710 23:15:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.711 23:15:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.711 23:15:43 -- common/autotest_common.sh@1205 -- # local i=0 00:12:54.711 23:15:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:54.711 23:15:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.711 23:15:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:54.711 23:15:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.711 23:15:43 -- common/autotest_common.sh@1217 -- # return 0 00:12:54.711 23:15:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.711 23:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.711 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:12:54.711 23:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.711 23:15:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.711 23:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.711 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:12:54.711 23:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.711 23:15:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.711 23:15:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.711 23:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.711 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:12:54.711 23:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.711 23:15:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.711 23:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.711 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:12:54.711 [2024-04-26 23:15:43.510799] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.711 23:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.711 23:15:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.711 23:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.711 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:12:54.711 23:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.711 23:15:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.711 23:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.711 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:12:54.711 23:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.711 23:15:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.099 23:15:45 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.099 23:15:45 -- common/autotest_common.sh@1184 -- # local i=0 00:12:56.099 23:15:45 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.099 23:15:45 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:12:56.099 23:15:45 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:58.014 23:15:47 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:58.014 23:15:47 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:58.014 23:15:47 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.014 23:15:47 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:58.014 23:15:47 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.014 23:15:47 -- common/autotest_common.sh@1194 -- # return 0 00:12:58.014 23:15:47 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.014 23:15:47 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.014 23:15:47 -- common/autotest_common.sh@1205 -- # local i=0 00:12:58.014 23:15:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:58.014 23:15:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.014 23:15:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:58.014 23:15:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.014 23:15:47 -- common/autotest_common.sh@1217 -- # return 0 00:12:58.014 23:15:47 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.014 23:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.014 23:15:47 -- common/autotest_common.sh@10 -- # set +x 00:12:58.014 23:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.014 23:15:47 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.014 23:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.014 23:15:47 -- common/autotest_common.sh@10 -- # set +x 00:12:58.014 23:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.014 23:15:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.014 23:15:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.014 23:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.014 23:15:47 -- common/autotest_common.sh@10 -- # set +x 00:12:58.275 23:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.275 23:15:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.275 23:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.275 23:15:47 -- common/autotest_common.sh@10 -- # set +x 00:12:58.275 [2024-04-26 23:15:47.280784] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.275 23:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.275 23:15:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.275 23:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.275 23:15:47 -- common/autotest_common.sh@10 -- # set +x 00:12:58.275 23:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.275 23:15:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.275 23:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:58.275 23:15:47 -- common/autotest_common.sh@10 -- # set +x 00:12:58.275 23:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:58.275 
23:15:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.660 23:15:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.660 23:15:48 -- common/autotest_common.sh@1184 -- # local i=0 00:12:59.660 23:15:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.661 23:15:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:59.661 23:15:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:01.573 23:15:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:01.573 23:15:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:01.573 23:15:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.573 23:15:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:01.573 23:15:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.573 23:15:50 -- common/autotest_common.sh@1194 -- # return 0 00:13:01.573 23:15:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.834 23:15:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.834 23:15:50 -- common/autotest_common.sh@1205 -- # local i=0 00:13:01.834 23:15:50 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:01.834 23:15:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.834 23:15:50 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:01.834 23:15:50 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.834 23:15:50 -- common/autotest_common.sh@1217 -- # return 0 00:13:01.834 23:15:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.834 23:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.834 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:13:01.834 23:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.834 23:15:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.834 23:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.834 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:13:01.834 23:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.834 23:15:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.834 23:15:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.834 23:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.834 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:13:01.834 23:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.834 23:15:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.834 23:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.834 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:13:01.834 [2024-04-26 23:15:50.990565] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.834 23:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.834 23:15:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.834 
23:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.834 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:13:01.834 23:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.834 23:15:51 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.834 23:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.834 23:15:51 -- common/autotest_common.sh@10 -- # set +x 00:13:01.834 23:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.834 23:15:51 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.747 23:15:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.747 23:15:52 -- common/autotest_common.sh@1184 -- # local i=0 00:13:03.747 23:15:52 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.747 23:15:52 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:03.747 23:15:52 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:05.667 23:15:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:05.667 23:15:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:05.667 23:15:54 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.667 23:15:54 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:05.667 23:15:54 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.667 23:15:54 -- common/autotest_common.sh@1194 -- # return 0 00:13:05.667 23:15:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.667 23:15:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.667 23:15:54 -- common/autotest_common.sh@1205 -- # local i=0 00:13:05.667 23:15:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:05.667 23:15:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.667 23:15:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:05.667 23:15:54 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.667 23:15:54 -- common/autotest_common.sh@1217 -- # return 0 00:13:05.667 23:15:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.667 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.667 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.667 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.667 23:15:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.667 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.667 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.667 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.667 23:15:54 -- target/rpc.sh@99 -- # seq 1 5 00:13:05.667 23:15:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.667 23:15:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.667 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.667 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.667 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.667 23:15:54 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.667 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.667 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.667 [2024-04-26 23:15:54.697710] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.667 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.667 23:15:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.667 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.667 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.667 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.667 23:15:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.667 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.667 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.667 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.667 23:15:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.668 23:15:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 [2024-04-26 23:15:54.761866] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.668 23:15:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 [2024-04-26 23:15:54.818013] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.668 23:15:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 [2024-04-26 23:15:54.878218] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 
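# --- editor's note: the waitforserial/waitforserial_disconnect helpers traced
# around the nvme connect/disconnect steps above poll lsblk until a namespace
# with the expected serial appears (or disappears). A minimal stand-alone
# sketch of that pattern; assumes root, nvme-cli, and the 15x2s retry budget
# visible in the trace.
waitforserial() {
    local serial=$1 i=0 want=${2:-1} found=0
    while (( i++ <= 15 )); do
        # count block devices whose SERIAL column matches
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == want )) && return 0
        sleep 2
    done
    return 1
}
waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 2
    done
    return 1
}
# usage mirroring the traced sequence:
#   nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
#   waitforserial SPDKISFASTANDAWESOME
#   nvme disconnect -n nqn.2016-06.io.spdk:cnode1
#   waitforserial_disconnect SPDKISFASTANDAWESOME
# --- end editor's note ---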
23:15:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.668 23:15:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.668 23:15:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.668 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.668 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.929 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.929 23:15:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.929 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.929 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.929 [2024-04-26 23:15:54.938406] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.929 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.929 23:15:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.929 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.929 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.929 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.929 23:15:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.929 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.929 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.929 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.929 23:15:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.929 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.929 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.929 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.929 23:15:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.929 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.929 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.929 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.929 23:15:54 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:13:05.929 23:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.929 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:13:05.929 23:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.929 23:15:54 -- target/rpc.sh@110 -- # stats='{ 00:13:05.929 "tick_rate": 2400000000, 00:13:05.929 "poll_groups": [ 00:13:05.929 { 00:13:05.929 "name": "nvmf_tgt_poll_group_0", 00:13:05.929 "admin_qpairs": 0, 00:13:05.929 "io_qpairs": 224, 00:13:05.929 "current_admin_qpairs": 0, 00:13:05.929 "current_io_qpairs": 0, 00:13:05.929 "pending_bdev_io": 0, 00:13:05.929 "completed_nvme_io": 322, 00:13:05.929 "transports": [ 00:13:05.929 { 00:13:05.929 "trtype": "TCP" 00:13:05.929 } 00:13:05.929 ] 00:13:05.929 }, 00:13:05.929 { 00:13:05.929 "name": "nvmf_tgt_poll_group_1", 00:13:05.929 "admin_qpairs": 1, 00:13:05.929 "io_qpairs": 223, 00:13:05.929 "current_admin_qpairs": 0, 00:13:05.929 "current_io_qpairs": 0, 00:13:05.929 "pending_bdev_io": 0, 00:13:05.929 "completed_nvme_io": 313, 00:13:05.929 "transports": [ 00:13:05.929 { 00:13:05.929 "trtype": "TCP" 00:13:05.929 } 00:13:05.929 ] 00:13:05.929 }, 00:13:05.929 { 00:13:05.929 "name": "nvmf_tgt_poll_group_2", 00:13:05.929 "admin_qpairs": 6, 00:13:05.929 "io_qpairs": 218, 00:13:05.929 "current_admin_qpairs": 0, 00:13:05.929 "current_io_qpairs": 0, 00:13:05.929 "pending_bdev_io": 0, 00:13:05.929 "completed_nvme_io": 329, 00:13:05.929 "transports": [ 00:13:05.929 { 00:13:05.929 "trtype": "TCP" 00:13:05.929 } 00:13:05.929 ] 00:13:05.929 }, 00:13:05.929 { 00:13:05.929 "name": "nvmf_tgt_poll_group_3", 00:13:05.929 "admin_qpairs": 0, 00:13:05.929 "io_qpairs": 224, 00:13:05.929 "current_admin_qpairs": 0, 00:13:05.929 "current_io_qpairs": 0, 00:13:05.929 "pending_bdev_io": 0, 00:13:05.929 "completed_nvme_io": 275, 00:13:05.929 "transports": [ 00:13:05.929 { 00:13:05.929 "trtype": "TCP" 00:13:05.929 } 00:13:05.929 ] 00:13:05.929 } 00:13:05.929 ] 00:13:05.929 }' 00:13:05.929 23:15:54 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:05.929 23:15:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:05.929 23:15:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:05.929 23:15:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:05.929 23:15:55 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:05.929 23:15:55 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:05.929 23:15:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:05.929 23:15:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:05.929 23:15:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:05.929 23:15:55 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:05.929 23:15:55 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:05.929 23:15:55 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:05.929 23:15:55 -- target/rpc.sh@123 -- # nvmftestfini 00:13:05.929 23:15:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:05.929 23:15:55 -- nvmf/common.sh@117 -- # sync 00:13:05.929 23:15:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:05.929 23:15:55 -- nvmf/common.sh@120 -- # set +e 00:13:05.929 23:15:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.929 23:15:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:05.929 rmmod nvme_tcp 00:13:05.929 rmmod nvme_fabrics 00:13:05.929 rmmod nvme_keyring 00:13:05.929 23:15:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:05.929 23:15:55 -- nvmf/common.sh@124 -- # set -e 00:13:05.929 23:15:55 -- 
nvmf/common.sh@125 -- # return 0 00:13:05.929 23:15:55 -- nvmf/common.sh@478 -- # '[' -n 3829989 ']' 00:13:05.929 23:15:55 -- nvmf/common.sh@479 -- # killprocess 3829989 00:13:05.929 23:15:55 -- common/autotest_common.sh@936 -- # '[' -z 3829989 ']' 00:13:05.929 23:15:55 -- common/autotest_common.sh@940 -- # kill -0 3829989 00:13:05.929 23:15:55 -- common/autotest_common.sh@941 -- # uname 00:13:05.929 23:15:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:05.929 23:15:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3829989 00:13:06.191 23:15:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:06.191 23:15:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:06.191 23:15:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3829989' 00:13:06.191 killing process with pid 3829989 00:13:06.191 23:15:55 -- common/autotest_common.sh@955 -- # kill 3829989 00:13:06.191 23:15:55 -- common/autotest_common.sh@960 -- # wait 3829989 00:13:06.191 23:15:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:06.191 23:15:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:06.191 23:15:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:06.191 23:15:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.191 23:15:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.191 23:15:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.191 23:15:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.191 23:15:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.739 23:15:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.739 00:13:08.739 real 0m37.466s 00:13:08.739 user 1m53.224s 00:13:08.739 sys 0m7.329s 00:13:08.739 23:15:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:08.739 23:15:57 -- common/autotest_common.sh@10 -- # set +x 00:13:08.739 ************************************ 00:13:08.739 END TEST nvmf_rpc 00:13:08.739 ************************************ 00:13:08.739 23:15:57 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:08.739 23:15:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:08.739 23:15:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.739 23:15:57 -- common/autotest_common.sh@10 -- # set +x 00:13:08.739 ************************************ 00:13:08.739 START TEST nvmf_invalid 00:13:08.739 ************************************ 00:13:08.739 23:15:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:08.739 * Looking for test storage... 
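# --- editor's note: the jsum helper used in the nvmf_rpc stats check above
# sums one numeric field of the nvmf_get_stats dump with jq piped into awk.
# Minimal sketch, assuming the JSON is held in $stats as captured in the trace:
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
# worked against the dump above:
#   jsum '.poll_groups[].admin_qpairs'  ->  0 + 1 + 6 + 0         = 7
#   jsum '.poll_groups[].io_qpairs'     ->  224 + 223 + 218 + 224 = 889
# which is exactly what the (( 7 > 0 )) and (( 889 > 0 )) assertions saw.
# --- end editor's note ---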
00:13:08.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.739 23:15:57 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.739 23:15:57 -- nvmf/common.sh@7 -- # uname -s 00:13:08.739 23:15:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.739 23:15:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.739 23:15:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.739 23:15:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.739 23:15:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.739 23:15:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.739 23:15:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.739 23:15:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.739 23:15:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.739 23:15:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.739 23:15:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:08.739 23:15:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:08.739 23:15:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.739 23:15:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.739 23:15:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.739 23:15:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.739 23:15:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.739 23:15:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.739 23:15:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.739 23:15:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.739 23:15:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.739 23:15:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.740 23:15:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.740 23:15:57 -- paths/export.sh@5 -- # export PATH 00:13:08.740 23:15:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.740 23:15:57 -- nvmf/common.sh@47 -- # : 0 00:13:08.740 23:15:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.740 23:15:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.740 23:15:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.740 23:15:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.740 23:15:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.740 23:15:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.740 23:15:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.740 23:15:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.740 23:15:57 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:08.740 23:15:57 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.740 23:15:57 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:08.740 23:15:57 -- target/invalid.sh@14 -- # target=foobar 00:13:08.740 23:15:57 -- target/invalid.sh@16 -- # RANDOM=0 00:13:08.740 23:15:57 -- target/invalid.sh@34 -- # nvmftestinit 00:13:08.740 23:15:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:08.740 23:15:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.740 23:15:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:08.740 23:15:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:08.740 23:15:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:08.740 23:15:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.740 23:15:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.740 23:15:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.740 23:15:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:08.740 23:15:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:08.740 23:15:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.740 23:15:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.336 23:16:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:15.336 23:16:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:15.336 23:16:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:15.336 23:16:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:15.336 23:16:04 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:15.336 23:16:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:15.336 23:16:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:15.336 23:16:04 -- nvmf/common.sh@295 -- # net_devs=() 00:13:15.336 23:16:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:15.336 23:16:04 -- nvmf/common.sh@296 -- # e810=() 00:13:15.336 23:16:04 -- nvmf/common.sh@296 -- # local -ga e810 00:13:15.336 23:16:04 -- nvmf/common.sh@297 -- # x722=() 00:13:15.336 23:16:04 -- nvmf/common.sh@297 -- # local -ga x722 00:13:15.336 23:16:04 -- nvmf/common.sh@298 -- # mlx=() 00:13:15.336 23:16:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:15.336 23:16:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.336 23:16:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:15.336 23:16:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:15.336 23:16:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:15.336 23:16:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.336 23:16:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:15.336 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:15.336 23:16:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.336 23:16:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:15.336 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:15.336 23:16:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:15.336 23:16:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.336 
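# --- editor's note: gather_supported_nvmf_pci_devs, traced above, buckets
# supported NICs by PCI vendor/device ID (e810/x722/mlx) and then resolves
# each address to its kernel net interface via sysfs. Condensed sketch; the
# pci_bus_cache population happens outside this excerpt, so the two 0x159b
# addresses found in this run are hardcoded here for illustration.
shopt -s nullglob                  # an unbound device should yield an empty glob
intel=0x8086
declare -A pci_bus_cache=(["$intel:0x159b"]="0000:31:00.0 0000:31:00.1")
e810=(${pci_bus_cache["$intel:0x159b"]})
net_devs=()
for pci in "${e810[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dirs under this PCI address
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
# --- end editor's note ---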
23:16:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.336 23:16:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:15.336 23:16:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.336 23:16:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:15.336 Found net devices under 0000:31:00.0: cvl_0_0 00:13:15.336 23:16:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.336 23:16:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.336 23:16:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.336 23:16:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:15.336 23:16:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.336 23:16:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:15.336 Found net devices under 0000:31:00.1: cvl_0_1 00:13:15.336 23:16:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.336 23:16:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:15.336 23:16:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:15.336 23:16:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:15.336 23:16:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:15.336 23:16:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.336 23:16:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.337 23:16:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.337 23:16:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:15.337 23:16:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.337 23:16:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.337 23:16:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:15.337 23:16:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.337 23:16:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.337 23:16:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:15.337 23:16:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:15.337 23:16:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.337 23:16:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.337 23:16:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.337 23:16:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.337 23:16:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:15.337 23:16:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.337 23:16:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.337 23:16:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.337 23:16:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:15.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.870 ms 00:13:15.598 00:13:15.598 --- 10.0.0.2 ping statistics --- 00:13:15.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.598 rtt min/avg/max/mdev = 0.870/0.870/0.870/0.000 ms 00:13:15.598 23:16:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:13:15.598 00:13:15.598 --- 10.0.0.1 ping statistics --- 00:13:15.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.598 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:13:15.599 23:16:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.599 23:16:04 -- nvmf/common.sh@411 -- # return 0 00:13:15.599 23:16:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:15.599 23:16:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.599 23:16:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:15.599 23:16:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:15.599 23:16:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.599 23:16:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:15.599 23:16:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:15.599 23:16:04 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:15.599 23:16:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:15.599 23:16:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:15.599 23:16:04 -- common/autotest_common.sh@10 -- # set +x 00:13:15.599 23:16:04 -- nvmf/common.sh@470 -- # nvmfpid=3839758 00:13:15.599 23:16:04 -- nvmf/common.sh@471 -- # waitforlisten 3839758 00:13:15.599 23:16:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.599 23:16:04 -- common/autotest_common.sh@817 -- # '[' -z 3839758 ']' 00:13:15.599 23:16:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.599 23:16:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:15.599 23:16:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.599 23:16:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:15.599 23:16:04 -- common/autotest_common.sh@10 -- # set +x 00:13:15.599 [2024-04-26 23:16:04.707114] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:15.599 [2024-04-26 23:16:04.707176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.599 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.599 [2024-04-26 23:16:04.781784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.599 [2024-04-26 23:16:04.820151] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.599 [2024-04-26 23:16:04.820202] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.599 [2024-04-26 23:16:04.820210] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.599 [2024-04-26 23:16:04.820216] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.599 [2024-04-26 23:16:04.820222] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
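# --- editor's note: the nvmf_tcp_init plumbing traced above moves the target
# interface (cvl_0_0) into its own namespace with 10.0.0.2 and leaves the
# initiator interface (cvl_0_1) in the root namespace with 10.0.0.1, then
# opens TCP/4420 and sanity-pings both ways. Stand-alone rendering of the
# traced commands; run as root.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec $NS ping -c 1 10.0.0.1    # target -> initiator
# --- end editor's note ---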
00:13:15.599 [2024-04-26 23:16:04.820342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.599 [2024-04-26 23:16:04.820484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.599 [2024-04-26 23:16:04.820646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.599 [2024-04-26 23:16:04.820647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.543 23:16:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:16.543 23:16:05 -- common/autotest_common.sh@850 -- # return 0 00:13:16.543 23:16:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:16.543 23:16:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:16.543 23:16:05 -- common/autotest_common.sh@10 -- # set +x 00:13:16.543 23:16:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.543 23:16:05 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:16.543 23:16:05 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5635 00:13:16.543 [2024-04-26 23:16:05.675869] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:16.543 23:16:05 -- target/invalid.sh@40 -- # out='request: 00:13:16.543 { 00:13:16.543 "nqn": "nqn.2016-06.io.spdk:cnode5635", 00:13:16.543 "tgt_name": "foobar", 00:13:16.543 "method": "nvmf_create_subsystem", 00:13:16.543 "req_id": 1 00:13:16.543 } 00:13:16.543 Got JSON-RPC error response 00:13:16.543 response: 00:13:16.543 { 00:13:16.543 "code": -32603, 00:13:16.543 "message": "Unable to find target foobar" 00:13:16.543 }' 00:13:16.543 23:16:05 -- target/invalid.sh@41 -- # [[ request: 00:13:16.543 { 00:13:16.543 "nqn": "nqn.2016-06.io.spdk:cnode5635", 00:13:16.543 "tgt_name": "foobar", 00:13:16.543 "method": "nvmf_create_subsystem", 00:13:16.543 "req_id": 1 00:13:16.543 } 00:13:16.543 Got JSON-RPC error response 00:13:16.543 response: 00:13:16.543 { 00:13:16.543 "code": -32603, 00:13:16.543 "message": "Unable to find target foobar" 00:13:16.543 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:16.543 23:16:05 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:16.543 23:16:05 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1074 00:13:16.804 [2024-04-26 23:16:05.852495] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1074: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:16.804 23:16:05 -- target/invalid.sh@45 -- # out='request: 00:13:16.804 { 00:13:16.804 "nqn": "nqn.2016-06.io.spdk:cnode1074", 00:13:16.804 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:16.804 "method": "nvmf_create_subsystem", 00:13:16.804 "req_id": 1 00:13:16.804 } 00:13:16.804 Got JSON-RPC error response 00:13:16.804 response: 00:13:16.804 { 00:13:16.804 "code": -32602, 00:13:16.804 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:16.804 }' 00:13:16.804 23:16:05 -- target/invalid.sh@46 -- # [[ request: 00:13:16.804 { 00:13:16.804 "nqn": "nqn.2016-06.io.spdk:cnode1074", 00:13:16.804 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:16.804 "method": "nvmf_create_subsystem", 00:13:16.804 "req_id": 1 00:13:16.804 } 00:13:16.804 Got JSON-RPC error response 00:13:16.804 response: 00:13:16.804 { 00:13:16.804 
"code": -32602, 00:13:16.804 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:16.804 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:16.804 23:16:05 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:16.804 23:16:05 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11865 00:13:16.804 [2024-04-26 23:16:06.029022] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11865: invalid model number 'SPDK_Controller' 00:13:17.067 23:16:06 -- target/invalid.sh@50 -- # out='request: 00:13:17.067 { 00:13:17.067 "nqn": "nqn.2016-06.io.spdk:cnode11865", 00:13:17.067 "model_number": "SPDK_Controller\u001f", 00:13:17.067 "method": "nvmf_create_subsystem", 00:13:17.067 "req_id": 1 00:13:17.067 } 00:13:17.067 Got JSON-RPC error response 00:13:17.067 response: 00:13:17.067 { 00:13:17.067 "code": -32602, 00:13:17.067 "message": "Invalid MN SPDK_Controller\u001f" 00:13:17.067 }' 00:13:17.067 23:16:06 -- target/invalid.sh@51 -- # [[ request: 00:13:17.067 { 00:13:17.067 "nqn": "nqn.2016-06.io.spdk:cnode11865", 00:13:17.067 "model_number": "SPDK_Controller\u001f", 00:13:17.067 "method": "nvmf_create_subsystem", 00:13:17.067 "req_id": 1 00:13:17.067 } 00:13:17.067 Got JSON-RPC error response 00:13:17.067 response: 00:13:17.067 { 00:13:17.067 "code": -32602, 00:13:17.067 "message": "Invalid MN SPDK_Controller\u001f" 00:13:17.067 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:17.067 23:16:06 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:17.067 23:16:06 -- target/invalid.sh@19 -- # local length=21 ll 00:13:17.067 23:16:06 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:17.067 23:16:06 -- target/invalid.sh@21 -- # local chars 00:13:17.067 23:16:06 -- target/invalid.sh@22 -- # local string 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # printf %x 68 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # string+=D 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # printf %x 95 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # string+=_ 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # printf %x 69 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # string+=E 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # printf %x 46 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # echo 
-e '\x2e' 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # string+=. 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # printf %x 101 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # string+=e 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # printf %x 73 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # string+=I 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # printf %x 79 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # string+=O 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.067 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.067 23:16:06 -- target/invalid.sh@25 -- # printf %x 54 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=6 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 88 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=X 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 34 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+='"' 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 119 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=w 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 97 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=a 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 68 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=D 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 110 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=n 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 55 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e 
'\x37' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=7 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 53 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=5 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 43 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=+ 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 104 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=h 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 37 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=% 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 97 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=a 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # printf %x 57 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:17.068 23:16:06 -- target/invalid.sh@25 -- # string+=9 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.068 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.068 23:16:06 -- target/invalid.sh@28 -- # [[ D == \- ]] 00:13:17.068 23:16:06 -- target/invalid.sh@31 -- # echo 'D_E.eIO6X"waDn75+h%a9' 00:13:17.068 23:16:06 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'D_E.eIO6X"waDn75+h%a9' nqn.2016-06.io.spdk:cnode24776 00:13:17.332 [2024-04-26 23:16:06.362102] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24776: invalid serial number 'D_E.eIO6X"waDn75+h%a9' 00:13:17.332 23:16:06 -- target/invalid.sh@54 -- # out='request: 00:13:17.332 { 00:13:17.332 "nqn": "nqn.2016-06.io.spdk:cnode24776", 00:13:17.332 "serial_number": "D_E.eIO6X\"waDn75+h%a9", 00:13:17.332 "method": "nvmf_create_subsystem", 00:13:17.332 "req_id": 1 00:13:17.332 } 00:13:17.332 Got JSON-RPC error response 00:13:17.332 response: 00:13:17.332 { 00:13:17.332 "code": -32602, 00:13:17.332 "message": "Invalid SN D_E.eIO6X\"waDn75+h%a9" 00:13:17.332 }' 00:13:17.332 23:16:06 -- target/invalid.sh@55 -- # [[ request: 00:13:17.332 { 00:13:17.332 "nqn": "nqn.2016-06.io.spdk:cnode24776", 00:13:17.332 "serial_number": "D_E.eIO6X\"waDn75+h%a9", 00:13:17.332 "method": "nvmf_create_subsystem", 00:13:17.332 "req_id": 1 00:13:17.332 } 00:13:17.332 Got JSON-RPC error response 00:13:17.332 response: 00:13:17.332 { 00:13:17.332 "code": -32602, 00:13:17.332 "message": "Invalid SN 
D_E.eIO6X\"waDn75+h%a9" 00:13:17.332 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:17.332 23:16:06 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:17.332 23:16:06 -- target/invalid.sh@19 -- # local length=41 ll 00:13:17.332 23:16:06 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:17.332 23:16:06 -- target/invalid.sh@21 -- # local chars 00:13:17.332 23:16:06 -- target/invalid.sh@22 -- # local string 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 48 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=0 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 77 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=M 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 63 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+='?' 
00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 114 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=r 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 32 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=' ' 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 57 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=9 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 112 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=p 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 114 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=r 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 84 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=T 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 51 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=3 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 73 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=I 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 125 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+='}' 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 77 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=M 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 40 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+='(' 
00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 86 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # string+=V 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.332 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # printf %x 36 00:13:17.332 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # string+='$' 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # printf %x 72 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # string+=H 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # printf %x 86 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # string+=V 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # printf %x 74 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # string+=J 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # printf %x 64 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # string+=@ 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # printf %x 59 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # string+=';' 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # printf %x 105 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # string+=i 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # printf %x 109 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # string+=m 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # printf %x 42 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # string+='*' 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.333 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # printf %x 116 00:13:17.333 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # string+=t 
00:13:17.595 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.595 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # printf %x 126 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # string+='~' 00:13:17.595 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.595 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # printf %x 102 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # string+=f 00:13:17.595 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.595 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # printf %x 45 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # string+=- 00:13:17.595 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.595 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # printf %x 61 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:17.595 23:16:06 -- target/invalid.sh@25 -- # string+== 00:13:17.595 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 104 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+=h 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 40 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+='(' 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 80 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+=P 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 45 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+=- 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 85 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+=U 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 71 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+=G 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 85 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+=U 
00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 45 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+=- 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 62 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+='>' 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 117 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+=u 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 100 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+=d 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # printf %x 109 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:17.596 23:16:06 -- target/invalid.sh@25 -- # string+=m 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.596 23:16:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.596 23:16:06 -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:13:17.596 23:16:06 -- target/invalid.sh@31 -- # echo '0M?r 9prT3I}M(V$HVJ@;im*t~f-=h(P-UGU->udm' 00:13:17.596 23:16:06 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '0M?r 9prT3I}M(V$HVJ@;im*t~f-=h(P-UGU->udm' nqn.2016-06.io.spdk:cnode14158 00:13:17.596 [2024-04-26 23:16:06.847709] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14158: invalid model number '0M?r 9prT3I}M(V$HVJ@;im*t~f-=h(P-UGU->udm' 00:13:17.858 23:16:06 -- target/invalid.sh@58 -- # out='request: 00:13:17.858 { 00:13:17.858 "nqn": "nqn.2016-06.io.spdk:cnode14158", 00:13:17.858 "model_number": "0M?r 9prT3I}M(V$HVJ@;im*t~f-=h(P-UGU->udm", 00:13:17.858 "method": "nvmf_create_subsystem", 00:13:17.858 "req_id": 1 00:13:17.858 } 00:13:17.858 Got JSON-RPC error response 00:13:17.858 response: 00:13:17.858 { 00:13:17.858 "code": -32602, 00:13:17.858 "message": "Invalid MN 0M?r 9prT3I}M(V$HVJ@;im*t~f-=h(P-UGU->udm" 00:13:17.858 }' 00:13:17.858 23:16:06 -- target/invalid.sh@59 -- # [[ request: 00:13:17.858 { 00:13:17.858 "nqn": "nqn.2016-06.io.spdk:cnode14158", 00:13:17.858 "model_number": "0M?r 9prT3I}M(V$HVJ@;im*t~f-=h(P-UGU->udm", 00:13:17.858 "method": "nvmf_create_subsystem", 00:13:17.858 "req_id": 1 00:13:17.858 } 00:13:17.858 Got JSON-RPC error response 00:13:17.858 response: 00:13:17.858 { 00:13:17.858 "code": -32602, 00:13:17.858 "message": "Invalid MN 0M?r 9prT3I}M(V$HVJ@;im*t~f-=h(P-UGU->udm" 00:13:17.858 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:17.858 23:16:06 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:17.858 [2024-04-26 23:16:07.020324] tcp.c: 
669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.858 23:16:07 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:18.118 23:16:07 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:18.118 23:16:07 -- target/invalid.sh@67 -- # echo '' 00:13:18.118 23:16:07 -- target/invalid.sh@67 -- # head -n 1 00:13:18.118 23:16:07 -- target/invalid.sh@67 -- # IP= 00:13:18.119 23:16:07 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:18.380 [2024-04-26 23:16:07.373443] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:18.380 23:16:07 -- target/invalid.sh@69 -- # out='request: 00:13:18.380 { 00:13:18.380 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:18.380 "listen_address": { 00:13:18.380 "trtype": "tcp", 00:13:18.380 "traddr": "", 00:13:18.380 "trsvcid": "4421" 00:13:18.380 }, 00:13:18.380 "method": "nvmf_subsystem_remove_listener", 00:13:18.380 "req_id": 1 00:13:18.380 } 00:13:18.380 Got JSON-RPC error response 00:13:18.380 response: 00:13:18.380 { 00:13:18.380 "code": -32602, 00:13:18.380 "message": "Invalid parameters" 00:13:18.380 }' 00:13:18.380 23:16:07 -- target/invalid.sh@70 -- # [[ request: 00:13:18.380 { 00:13:18.380 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:18.380 "listen_address": { 00:13:18.380 "trtype": "tcp", 00:13:18.380 "traddr": "", 00:13:18.380 "trsvcid": "4421" 00:13:18.380 }, 00:13:18.380 "method": "nvmf_subsystem_remove_listener", 00:13:18.380 "req_id": 1 00:13:18.380 } 00:13:18.380 Got JSON-RPC error response 00:13:18.380 response: 00:13:18.380 { 00:13:18.380 "code": -32602, 00:13:18.380 "message": "Invalid parameters" 00:13:18.380 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:18.380 23:16:07 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9377 -i 0 00:13:18.380 [2024-04-26 23:16:07.529913] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9377: invalid cntlid range [0-65519] 00:13:18.380 23:16:07 -- target/invalid.sh@73 -- # out='request: 00:13:18.380 { 00:13:18.380 "nqn": "nqn.2016-06.io.spdk:cnode9377", 00:13:18.380 "min_cntlid": 0, 00:13:18.380 "method": "nvmf_create_subsystem", 00:13:18.380 "req_id": 1 00:13:18.380 } 00:13:18.380 Got JSON-RPC error response 00:13:18.380 response: 00:13:18.380 { 00:13:18.380 "code": -32602, 00:13:18.380 "message": "Invalid cntlid range [0-65519]" 00:13:18.380 }' 00:13:18.380 23:16:07 -- target/invalid.sh@74 -- # [[ request: 00:13:18.380 { 00:13:18.380 "nqn": "nqn.2016-06.io.spdk:cnode9377", 00:13:18.380 "min_cntlid": 0, 00:13:18.380 "method": "nvmf_create_subsystem", 00:13:18.380 "req_id": 1 00:13:18.380 } 00:13:18.380 Got JSON-RPC error response 00:13:18.380 response: 00:13:18.380 { 00:13:18.380 "code": -32602, 00:13:18.380 "message": "Invalid cntlid range [0-65519]" 00:13:18.380 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.380 23:16:07 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31709 -i 65520 00:13:18.642 [2024-04-26 23:16:07.702460] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31709: invalid cntlid range [65520-65519] 00:13:18.642 23:16:07 -- 
target/invalid.sh@75 -- # out='request: 00:13:18.642 { 00:13:18.642 "nqn": "nqn.2016-06.io.spdk:cnode31709", 00:13:18.642 "min_cntlid": 65520, 00:13:18.642 "method": "nvmf_create_subsystem", 00:13:18.642 "req_id": 1 00:13:18.642 } 00:13:18.642 Got JSON-RPC error response 00:13:18.642 response: 00:13:18.642 { 00:13:18.642 "code": -32602, 00:13:18.642 "message": "Invalid cntlid range [65520-65519]" 00:13:18.642 }' 00:13:18.642 23:16:07 -- target/invalid.sh@76 -- # [[ request: 00:13:18.642 { 00:13:18.642 "nqn": "nqn.2016-06.io.spdk:cnode31709", 00:13:18.642 "min_cntlid": 65520, 00:13:18.642 "method": "nvmf_create_subsystem", 00:13:18.642 "req_id": 1 00:13:18.642 } 00:13:18.642 Got JSON-RPC error response 00:13:18.642 response: 00:13:18.642 { 00:13:18.642 "code": -32602, 00:13:18.642 "message": "Invalid cntlid range [65520-65519]" 00:13:18.642 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.642 23:16:07 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12442 -I 0 00:13:18.642 [2024-04-26 23:16:07.875027] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12442: invalid cntlid range [1-0] 00:13:18.903 23:16:07 -- target/invalid.sh@77 -- # out='request: 00:13:18.903 { 00:13:18.903 "nqn": "nqn.2016-06.io.spdk:cnode12442", 00:13:18.903 "max_cntlid": 0, 00:13:18.903 "method": "nvmf_create_subsystem", 00:13:18.903 "req_id": 1 00:13:18.903 } 00:13:18.903 Got JSON-RPC error response 00:13:18.903 response: 00:13:18.903 { 00:13:18.903 "code": -32602, 00:13:18.903 "message": "Invalid cntlid range [1-0]" 00:13:18.903 }' 00:13:18.903 23:16:07 -- target/invalid.sh@78 -- # [[ request: 00:13:18.903 { 00:13:18.903 "nqn": "nqn.2016-06.io.spdk:cnode12442", 00:13:18.903 "max_cntlid": 0, 00:13:18.903 "method": "nvmf_create_subsystem", 00:13:18.903 "req_id": 1 00:13:18.903 } 00:13:18.903 Got JSON-RPC error response 00:13:18.903 response: 00:13:18.903 { 00:13:18.903 "code": -32602, 00:13:18.903 "message": "Invalid cntlid range [1-0]" 00:13:18.903 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.903 23:16:07 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7549 -I 65520 00:13:18.903 [2024-04-26 23:16:08.051601] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7549: invalid cntlid range [1-65520] 00:13:18.903 23:16:08 -- target/invalid.sh@79 -- # out='request: 00:13:18.903 { 00:13:18.903 "nqn": "nqn.2016-06.io.spdk:cnode7549", 00:13:18.903 "max_cntlid": 65520, 00:13:18.903 "method": "nvmf_create_subsystem", 00:13:18.903 "req_id": 1 00:13:18.903 } 00:13:18.903 Got JSON-RPC error response 00:13:18.903 response: 00:13:18.903 { 00:13:18.903 "code": -32602, 00:13:18.903 "message": "Invalid cntlid range [1-65520]" 00:13:18.903 }' 00:13:18.903 23:16:08 -- target/invalid.sh@80 -- # [[ request: 00:13:18.903 { 00:13:18.903 "nqn": "nqn.2016-06.io.spdk:cnode7549", 00:13:18.903 "max_cntlid": 65520, 00:13:18.903 "method": "nvmf_create_subsystem", 00:13:18.903 "req_id": 1 00:13:18.903 } 00:13:18.903 Got JSON-RPC error response 00:13:18.903 response: 00:13:18.903 { 00:13:18.903 "code": -32602, 00:13:18.903 "message": "Invalid cntlid range [1-65520]" 00:13:18.903 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.903 23:16:08 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode29996 -i 6 -I 5 00:13:19.164 [2024-04-26 23:16:08.228167] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29996: invalid cntlid range [6-5] 00:13:19.164 23:16:08 -- target/invalid.sh@83 -- # out='request: 00:13:19.164 { 00:13:19.164 "nqn": "nqn.2016-06.io.spdk:cnode29996", 00:13:19.164 "min_cntlid": 6, 00:13:19.164 "max_cntlid": 5, 00:13:19.164 "method": "nvmf_create_subsystem", 00:13:19.164 "req_id": 1 00:13:19.164 } 00:13:19.164 Got JSON-RPC error response 00:13:19.164 response: 00:13:19.164 { 00:13:19.164 "code": -32602, 00:13:19.164 "message": "Invalid cntlid range [6-5]" 00:13:19.164 }' 00:13:19.164 23:16:08 -- target/invalid.sh@84 -- # [[ request: 00:13:19.164 { 00:13:19.164 "nqn": "nqn.2016-06.io.spdk:cnode29996", 00:13:19.164 "min_cntlid": 6, 00:13:19.164 "max_cntlid": 5, 00:13:19.164 "method": "nvmf_create_subsystem", 00:13:19.164 "req_id": 1 00:13:19.164 } 00:13:19.164 Got JSON-RPC error response 00:13:19.164 response: 00:13:19.164 { 00:13:19.164 "code": -32602, 00:13:19.164 "message": "Invalid cntlid range [6-5]" 00:13:19.164 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:19.164 23:16:08 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:19.164 23:16:08 -- target/invalid.sh@87 -- # out='request: 00:13:19.164 { 00:13:19.164 "name": "foobar", 00:13:19.164 "method": "nvmf_delete_target", 00:13:19.164 "req_id": 1 00:13:19.164 } 00:13:19.164 Got JSON-RPC error response 00:13:19.164 response: 00:13:19.164 { 00:13:19.164 "code": -32602, 00:13:19.164 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:19.164 }' 00:13:19.164 23:16:08 -- target/invalid.sh@88 -- # [[ request: 00:13:19.164 { 00:13:19.164 "name": "foobar", 00:13:19.164 "method": "nvmf_delete_target", 00:13:19.164 "req_id": 1 00:13:19.164 } 00:13:19.164 Got JSON-RPC error response 00:13:19.164 response: 00:13:19.164 { 00:13:19.164 "code": -32602, 00:13:19.164 "message": "The specified target doesn't exist, cannot delete it." 
00:13:19.164 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:19.164 23:16:08 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:19.164 23:16:08 -- target/invalid.sh@91 -- # nvmftestfini 00:13:19.164 23:16:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:19.164 23:16:08 -- nvmf/common.sh@117 -- # sync 00:13:19.164 23:16:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:19.164 23:16:08 -- nvmf/common.sh@120 -- # set +e 00:13:19.164 23:16:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:19.164 23:16:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:19.164 rmmod nvme_tcp 00:13:19.164 rmmod nvme_fabrics 00:13:19.164 rmmod nvme_keyring 00:13:19.424 23:16:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:19.424 23:16:08 -- nvmf/common.sh@124 -- # set -e 00:13:19.424 23:16:08 -- nvmf/common.sh@125 -- # return 0 00:13:19.424 23:16:08 -- nvmf/common.sh@478 -- # '[' -n 3839758 ']' 00:13:19.424 23:16:08 -- nvmf/common.sh@479 -- # killprocess 3839758 00:13:19.424 23:16:08 -- common/autotest_common.sh@936 -- # '[' -z 3839758 ']' 00:13:19.424 23:16:08 -- common/autotest_common.sh@940 -- # kill -0 3839758 00:13:19.424 23:16:08 -- common/autotest_common.sh@941 -- # uname 00:13:19.424 23:16:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:19.424 23:16:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3839758 00:13:19.424 23:16:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:19.424 23:16:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:19.424 23:16:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3839758' 00:13:19.424 killing process with pid 3839758 00:13:19.424 23:16:08 -- common/autotest_common.sh@955 -- # kill 3839758 00:13:19.424 23:16:08 -- common/autotest_common.sh@960 -- # wait 3839758 00:13:19.424 23:16:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:19.424 23:16:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:19.424 23:16:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:19.424 23:16:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.424 23:16:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:19.424 23:16:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.424 23:16:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.424 23:16:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.018 23:16:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:22.018 00:13:22.018 real 0m13.050s 00:13:22.018 user 0m19.203s 00:13:22.018 sys 0m6.020s 00:13:22.018 23:16:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:22.018 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:13:22.018 ************************************ 00:13:22.018 END TEST nvmf_invalid 00:13:22.018 ************************************ 00:13:22.018 23:16:10 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:22.018 23:16:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:22.018 23:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.018 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:13:22.018 ************************************ 00:13:22.018 START TEST nvmf_abort 00:13:22.018 ************************************ 00:13:22.018 23:16:10 -- 
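Taken together, the -32602 responses above trace out the target's controller-ID rule before the abort suite begins below: the reported ranges [0-65519], [65520-65519], [1-0], [1-65520], and [6-5] are all rejected, which implies cntlid must lie in [1, 65519] with min not exceeding max. A small bash restatement of that inferred check (the helper name is hypothetical, and the bounds are read off the error strings rather than taken from the SPDK source):

  # Inferred boundary rule behind the 'Invalid cntlid range' responses.
  valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && max <= 65519 && min <= max ))
  }
  valid_cntlid_range     0 65519 || echo 'reject [0-65519]'      # min below 1
  valid_cntlid_range 65520 65519 || echo 'reject [65520-65519]'  # min above cap
  valid_cntlid_range     1     0 || echo 'reject [1-0]'          # max below 1
  valid_cntlid_range     1 65520 || echo 'reject [1-65520]'      # max above cap
  valid_cntlid_range     6     5 || echo 'reject [6-5]'          # min > max

The 65519 cap (0xFFEF) is consistent with NVMe reserving controller IDs 0xFFF0 through 0xFFFF, though again that constant is inferred from the log, not quoted from the spec.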
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:22.018 * Looking for test storage... 00:13:22.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.018 23:16:10 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.018 23:16:10 -- nvmf/common.sh@7 -- # uname -s 00:13:22.018 23:16:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.018 23:16:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.018 23:16:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.018 23:16:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.018 23:16:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.018 23:16:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.018 23:16:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.018 23:16:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.018 23:16:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.018 23:16:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.018 23:16:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:22.018 23:16:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:22.018 23:16:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.018 23:16:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.018 23:16:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.018 23:16:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.018 23:16:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.018 23:16:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.018 23:16:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.018 23:16:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.018 23:16:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.018 23:16:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.018 23:16:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.018 23:16:10 -- paths/export.sh@5 -- # export PATH 00:13:22.018 23:16:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.018 23:16:10 -- nvmf/common.sh@47 -- # : 0 00:13:22.018 23:16:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:22.018 23:16:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:22.018 23:16:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.018 23:16:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.018 23:16:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.018 23:16:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:22.018 23:16:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:22.018 23:16:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:22.018 23:16:10 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:22.018 23:16:10 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:22.018 23:16:10 -- target/abort.sh@14 -- # nvmftestinit 00:13:22.018 23:16:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:22.018 23:16:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.018 23:16:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:22.018 23:16:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:22.018 23:16:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:22.018 23:16:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.018 23:16:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.018 23:16:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.018 23:16:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:22.018 23:16:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:22.018 23:16:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:22.018 23:16:10 -- common/autotest_common.sh@10 -- # set +x 00:13:30.162 23:16:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:30.162 23:16:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.162 23:16:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.162 23:16:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.162 23:16:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.162 23:16:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.162 23:16:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.162 23:16:17 -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.162 23:16:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.162 23:16:17 -- nvmf/common.sh@296 -- 
# e810=() 00:13:30.162 23:16:17 -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.162 23:16:17 -- nvmf/common.sh@297 -- # x722=() 00:13:30.162 23:16:17 -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.162 23:16:17 -- nvmf/common.sh@298 -- # mlx=() 00:13:30.162 23:16:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.162 23:16:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.162 23:16:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.162 23:16:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.162 23:16:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.162 23:16:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.162 23:16:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.162 23:16:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.163 23:16:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.163 23:16:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:30.163 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:30.163 23:16:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.163 23:16:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:30.163 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:30.163 23:16:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.163 23:16:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.163 23:16:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.163 23:16:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.163 23:16:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:30.163 23:16:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.163 23:16:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:30.163 Found 
net devices under 0000:31:00.0: cvl_0_0
00:13:30.163 23:16:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:13:30.163 23:16:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:30.163 23:16:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:30.163 23:16:17 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:13:30.163 23:16:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:30.163 23:16:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:13:30.163 Found net devices under 0000:31:00.1: cvl_0_1
00:13:30.163 23:16:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:13:30.163 23:16:17 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:13:30.163 23:16:17 -- nvmf/common.sh@403 -- # is_hw=yes
00:13:30.163 23:16:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:13:30.163 23:16:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:13:30.163 23:16:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:13:30.163 23:16:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:30.163 23:16:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:30.163 23:16:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:30.163 23:16:17 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:13:30.163 23:16:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:30.163 23:16:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:30.163 23:16:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:13:30.163 23:16:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:30.163 23:16:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:30.163 23:16:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:13:30.163 23:16:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:13:30.163 23:16:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:13:30.163 23:16:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:30.163 23:16:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:30.163 23:16:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:30.163 23:16:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:30.163 23:16:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:30.163 23:16:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:30.163 23:16:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:30.163 23:16:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:30.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:30.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms
00:13:30.163
00:13:30.163 --- 10.0.0.2 ping statistics ---
00:13:30.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:30.163 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms
00:13:30.163 23:16:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:30.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:30.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:13:30.163
00:13:30.163 --- 10.0.0.1 ping statistics ---
00:13:30.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:30.163 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:13:30.163 23:16:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:30.163 23:16:18 -- nvmf/common.sh@411 -- # return 0
00:13:30.163 23:16:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:13:30.163 23:16:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:30.163 23:16:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:13:30.163 23:16:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:13:30.163 23:16:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:30.163 23:16:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:13:30.163 23:16:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:13:30.163 23:16:18 -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:13:30.163 23:16:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:13:30.163 23:16:18 -- common/autotest_common.sh@710 -- # xtrace_disable
00:13:30.163 23:16:18 -- common/autotest_common.sh@10 -- # set +x
00:13:30.163 23:16:18 -- nvmf/common.sh@470 -- # nvmfpid=3844924
00:13:30.163 23:16:18 -- nvmf/common.sh@471 -- # waitforlisten 3844924
00:13:30.163 23:16:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:30.163 23:16:18 -- common/autotest_common.sh@817 -- # '[' -z 3844924 ']'
00:13:30.163 23:16:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:30.163 23:16:18 -- common/autotest_common.sh@822 -- # local max_retries=100
00:13:30.163 23:16:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:30.163 23:16:18 -- common/autotest_common.sh@826 -- # xtrace_disable
00:13:30.163 23:16:18 -- common/autotest_common.sh@10 -- # set +x
00:13:30.163 [2024-04-26 23:16:18.365704] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:13:30.163 [2024-04-26 23:16:18.365769] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:30.163 EAL: No free 2048 kB hugepages reported on node 1
00:13:30.163 [2024-04-26 23:16:18.437984] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:30.163 [2024-04-26 23:16:18.475921] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:30.163 [2024-04-26 23:16:18.475969] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:30.163 [2024-04-26 23:16:18.475978] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:30.163 [2024-04-26 23:16:18.475986] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:30.163 [2024-04-26 23:16:18.475993] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
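The nvmfappstart step above boils down to launching nvmf_tgt inside the target namespace and polling until the RPC socket answers. A rough, hand-written equivalent of that launch-and-wait sequence (the retry loop is a sketch under the traced rpc_addr and max_retries values, not the autotest waitforlisten helper itself):

  # Start the target in the namespace, then wait for /var/tmp/spdk.sock.
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do             # max_retries=100, as traced
    [[ -S /var/tmp/spdk.sock ]] && break      # socket exists once init completes
    kill -0 "$nvmfpid" 2>/dev/null || exit 1  # give up if the target died
    sleep 0.5
  done

The -m 0xE core mask selects cores 1 through 3, which matches the 'Total cores available: 3' line above and the three reactor start notices that follow.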
00:13:30.163 [2024-04-26 23:16:18.476147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.163 [2024-04-26 23:16:18.476280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.163 [2024-04-26 23:16:18.476281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.163 23:16:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:30.163 23:16:19 -- common/autotest_common.sh@850 -- # return 0 00:13:30.163 23:16:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:30.163 23:16:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:30.163 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.163 23:16:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.163 23:16:19 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:30.163 23:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.163 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.163 [2024-04-26 23:16:19.191145] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.163 23:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.163 23:16:19 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:30.163 23:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.163 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.163 Malloc0 00:13:30.163 23:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.163 23:16:19 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:30.163 23:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.163 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.163 Delay0 00:13:30.163 23:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.163 23:16:19 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:30.163 23:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.163 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.163 23:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.163 23:16:19 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:30.163 23:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.163 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.163 23:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.163 23:16:19 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:30.163 23:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.163 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.163 [2024-04-26 23:16:19.268214] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.163 23:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.163 23:16:19 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:30.163 23:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.163 23:16:19 -- common/autotest_common.sh@10 -- # set +x 00:13:30.163 23:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.163 23:16:19 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:30.163 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.163 [2024-04-26 23:16:19.336089] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:32.706 Initializing NVMe Controllers 00:13:32.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:32.706 controller IO queue size 128 less than required 00:13:32.706 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:32.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:32.706 Initialization complete. Launching workers. 00:13:32.706 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33955 00:13:32.706 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34016, failed to submit 62 00:13:32.706 success 33959, unsuccess 57, failed 0 00:13:32.706 23:16:21 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:32.706 23:16:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.706 23:16:21 -- common/autotest_common.sh@10 -- # set +x 00:13:32.706 23:16:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.706 23:16:21 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:32.706 23:16:21 -- target/abort.sh@38 -- # nvmftestfini 00:13:32.706 23:16:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:32.706 23:16:21 -- nvmf/common.sh@117 -- # sync 00:13:32.706 23:16:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.706 23:16:21 -- nvmf/common.sh@120 -- # set +e 00:13:32.706 23:16:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.706 23:16:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.706 rmmod nvme_tcp 00:13:32.706 rmmod nvme_fabrics 00:13:32.706 rmmod nvme_keyring 00:13:32.706 23:16:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.706 23:16:21 -- nvmf/common.sh@124 -- # set -e 00:13:32.706 23:16:21 -- nvmf/common.sh@125 -- # return 0 00:13:32.706 23:16:21 -- nvmf/common.sh@478 -- # '[' -n 3844924 ']' 00:13:32.706 23:16:21 -- nvmf/common.sh@479 -- # killprocess 3844924 00:13:32.706 23:16:21 -- common/autotest_common.sh@936 -- # '[' -z 3844924 ']' 00:13:32.706 23:16:21 -- common/autotest_common.sh@940 -- # kill -0 3844924 00:13:32.706 23:16:21 -- common/autotest_common.sh@941 -- # uname 00:13:32.706 23:16:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:32.706 23:16:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3844924 00:13:32.706 23:16:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:32.706 23:16:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:32.706 23:16:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3844924' 00:13:32.706 killing process with pid 3844924 00:13:32.706 23:16:21 -- common/autotest_common.sh@955 -- # kill 3844924 00:13:32.706 23:16:21 -- common/autotest_common.sh@960 -- # wait 3844924 00:13:32.706 23:16:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:32.706 23:16:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:32.706 23:16:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:32.706 23:16:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.706 23:16:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:32.706 
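The abort run's counters above are internally consistent: of the 34016 abort commands submitted (with 62 more that could not be submitted), 33959 succeeded and 57 came back unsuccessful, presumably because the targeted I/O completed before the abort caught it. A quick tally check with the logged numbers:

  # Every submitted abort should be accounted for on the CTRLR line.
  submitted=34016 no_submit=62
  success=33959 unsuccess=57 failed=0
  (( success + unsuccess + failed == submitted )) && echo 'abort counters balance'

The 'controller IO queue size 128 less than required' warning also appears to be deliberate here: driving queue depth 128 against a smaller controller queue leaves requests queued at the NVMe driver, which is what gives the abort example something to abort.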
23:16:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.706 23:16:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.706 23:16:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.618 23:16:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:34.618 00:13:34.618 real 0m12.862s 00:13:34.618 user 0m13.272s 00:13:34.618 sys 0m6.228s 00:13:34.618 23:16:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:34.618 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:13:34.618 ************************************ 00:13:34.618 END TEST nvmf_abort 00:13:34.618 ************************************ 00:13:34.618 23:16:23 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:34.618 23:16:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:34.618 23:16:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:34.618 23:16:23 -- common/autotest_common.sh@10 -- # set +x 00:13:34.879 ************************************ 00:13:34.879 START TEST nvmf_ns_hotplug_stress 00:13:34.879 ************************************ 00:13:34.879 23:16:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:34.879 * Looking for test storage... 00:13:34.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.879 23:16:24 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.879 23:16:24 -- nvmf/common.sh@7 -- # uname -s 00:13:34.879 23:16:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.879 23:16:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.879 23:16:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.879 23:16:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.879 23:16:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.879 23:16:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.879 23:16:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.879 23:16:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.879 23:16:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.879 23:16:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.879 23:16:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:34.879 23:16:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:34.879 23:16:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.879 23:16:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.879 23:16:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.879 23:16:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.879 23:16:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.879 23:16:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.879 23:16:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.879 23:16:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.879 23:16:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.879 23:16:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.879 23:16:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.879 23:16:24 -- paths/export.sh@5 -- # export PATH 00:13:34.879 23:16:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.879 23:16:24 -- nvmf/common.sh@47 -- # : 0 00:13:34.879 23:16:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:34.879 23:16:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:34.879 23:16:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.879 23:16:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.879 23:16:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.879 23:16:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:34.879 23:16:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:34.879 23:16:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:34.879 23:16:24 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:34.879 23:16:24 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:13:34.879 23:16:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:34.879 23:16:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.879 23:16:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:34.879 23:16:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:34.879 23:16:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:34.879 23:16:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:34.879 23:16:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.879 23:16:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.879 23:16:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:34.879 23:16:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:34.879 23:16:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:34.879 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:13:43.014 23:16:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:43.014 23:16:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.014 23:16:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.014 23:16:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.014 23:16:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.014 23:16:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.014 23:16:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.014 23:16:30 -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.014 23:16:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.014 23:16:30 -- nvmf/common.sh@296 -- # e810=() 00:13:43.014 23:16:30 -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.014 23:16:30 -- nvmf/common.sh@297 -- # x722=() 00:13:43.014 23:16:30 -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.014 23:16:30 -- nvmf/common.sh@298 -- # mlx=() 00:13:43.014 23:16:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.014 23:16:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.014 23:16:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.014 23:16:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.014 23:16:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.014 23:16:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.014 23:16:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:43.014 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:43.014 23:16:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.014 23:16:30 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:43.014 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:43.014 23:16:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.014 23:16:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.014 23:16:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.014 23:16:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.014 23:16:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:43.014 23:16:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.014 23:16:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:43.014 Found net devices under 0000:31:00.0: cvl_0_0 00:13:43.014 23:16:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.014 23:16:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.014 23:16:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.014 23:16:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:43.014 23:16:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.014 23:16:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:43.014 Found net devices under 0000:31:00.1: cvl_0_1 00:13:43.014 23:16:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.014 23:16:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:43.015 23:16:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:43.015 23:16:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:43.015 23:16:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:43.015 23:16:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:43.015 23:16:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.015 23:16:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.015 23:16:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.015 23:16:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.015 23:16:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.015 23:16:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.015 23:16:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.015 23:16:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.015 23:16:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.015 23:16:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.015 23:16:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.015 23:16:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.015 23:16:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.015 23:16:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.015 23:16:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.015 23:16:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.015 23:16:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
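The sequence above, completed just below with the loopback link, the iptables accept rule, and a ping in each direction, splits one NIC port into a target namespace while its sibling port stays in the root namespace as the initiator. A condensed sketch of that plumbing, with interface and namespace names copied from the trace:

  # Move the target port into its own namespace; address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                           # root ns to target ns

On this phy rig the two ports are presumably looped back to each other, so traffic between 10.0.0.1 and 10.0.0.2 crosses real hardware even though both ends live on one host.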
00:13:43.015 23:16:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:43.015 23:16:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:43.015 23:16:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:43.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:43.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms
00:13:43.015
00:13:43.015 --- 10.0.0.2 ping statistics ---
00:13:43.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:43.015 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms
00:13:43.015 23:16:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:43.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:43.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms
00:13:43.015
00:13:43.015 --- 10.0.0.1 ping statistics ---
00:13:43.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:43.015 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms
00:13:43.015 23:16:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:43.015 23:16:31 -- nvmf/common.sh@411 -- # return 0
00:13:43.015 23:16:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:13:43.015 23:16:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:43.015 23:16:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:13:43.015 23:16:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:13:43.015 23:16:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:43.015 23:16:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:13:43.015 23:16:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:13:43.015 23:16:31 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE
00:13:43.015 23:16:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:13:43.015 23:16:31 -- common/autotest_common.sh@710 -- # xtrace_disable
00:13:43.015 23:16:31 -- common/autotest_common.sh@10 -- # set +x
00:13:43.015 23:16:31 -- nvmf/common.sh@470 -- # nvmfpid=3849917
00:13:43.015 23:16:31 -- nvmf/common.sh@471 -- # waitforlisten 3849917
00:13:43.015 23:16:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:43.015 23:16:31 -- common/autotest_common.sh@817 -- # '[' -z 3849917 ']'
00:13:43.015 23:16:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:43.015 23:16:31 -- common/autotest_common.sh@822 -- # local max_retries=100
00:13:43.015 23:16:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:43.015 23:16:31 -- common/autotest_common.sh@826 -- # xtrace_disable
00:13:43.015 23:16:31 -- common/autotest_common.sh@10 -- # set +x
00:13:43.015 [2024-04-26 23:16:31.250605] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:13:43.015 [2024-04-26 23:16:31.250656] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.015 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.015 [2024-04-26 23:16:31.320189] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.015 [2024-04-26 23:16:31.352938] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.015 [2024-04-26 23:16:31.352982] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.015 [2024-04-26 23:16:31.352991] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.015 [2024-04-26 23:16:31.352999] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.015 [2024-04-26 23:16:31.353005] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.015 [2024-04-26 23:16:31.353169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.015 [2024-04-26 23:16:31.353339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.015 [2024-04-26 23:16:31.353339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.015 23:16:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:43.015 23:16:32 -- common/autotest_common.sh@850 -- # return 0 00:13:43.015 23:16:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:43.015 23:16:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:43.015 23:16:32 -- common/autotest_common.sh@10 -- # set +x 00:13:43.015 23:16:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.015 23:16:32 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:13:43.015 23:16:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:43.015 [2024-04-26 23:16:32.203316] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.015 23:16:32 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.275 23:16:32 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.536 [2024-04-26 23:16:32.540647] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.536 23:16:32 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:43.536 23:16:32 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:43.796 Malloc0 00:13:43.796 23:16:32 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:44.056 Delay0 00:13:44.056 23:16:33 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.056 23:16:33 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:44.316 NULL1 00:13:44.316 23:16:33 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:44.578 23:16:33 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3850295 00:13:44.578 23:16:33 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:44.578 23:16:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:44.578 23:16:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.578 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.578 23:16:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.839 23:16:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:13:44.839 23:16:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:44.839 [2024-04-26 23:16:34.055546] bdev.c:4971:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:13:44.839 true 00:13:44.839 23:16:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:44.839 23:16:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.099 23:16:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.360 23:16:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:13:45.360 23:16:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:45.360 true 00:13:45.360 23:16:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:45.360 23:16:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.622 23:16:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.883 23:16:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:13:45.883 23:16:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:45.883 true 00:13:45.883 23:16:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:45.883 23:16:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.145 23:16:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.407 23:16:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:13:46.407 23:16:35 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:46.407 true 00:13:46.407 23:16:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:46.407 23:16:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.668 23:16:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.927 23:16:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:13:46.927 23:16:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:46.927 true 00:13:46.927 23:16:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:46.927 23:16:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.189 23:16:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.449 23:16:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:13:47.449 23:16:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:47.449 true 00:13:47.450 23:16:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:47.450 23:16:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.711 23:16:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.971 23:16:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:13:47.971 23:16:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:47.971 true 00:13:47.971 23:16:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:47.971 23:16:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.233 23:16:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.493 23:16:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:13:48.493 23:16:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:48.493 true 00:13:48.493 23:16:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:48.493 23:16:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.754 23:16:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.015 23:16:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:13:49.015 23:16:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:49.015 true 
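The rounds above (and the ones that follow through null_size=1060) are one loop: spdk_nvme_perf runs in the background against the subsystem for 30 seconds while the script re-plugs namespace 1 and grows NULL1 by one block per pass, stopping once kill -0 reports the perf process gone. A condensed sketch of that cycle, paraphrasing the ns_hotplug_stress.sh lines visible in the xtrace (rpc.py path shortened; the exact loop wording in the script may differ):

    # Hedged paraphrase of test/nvmf/target/ns_hotplug_stress.sh@31-41.
    rpc=./scripts/rpc.py
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                                    # 3850295 in this run
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do      # loop while perf is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))               # 1001, 1002, ... 1060
        $rpc bdev_null_resize NULL1 "$null_size"
    done
    wait "$PERF_PID"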
00:13:49.015 23:16:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:49.015 23:16:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.275 23:16:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.535 23:16:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:13:49.535 23:16:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:49.535 true 00:13:49.535 23:16:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:49.535 23:16:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.795 23:16:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.795 23:16:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:13:49.795 23:16:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:50.056 true 00:13:50.056 23:16:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:50.056 23:16:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.317 23:16:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.317 23:16:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:13:50.317 23:16:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:50.577 true 00:13:50.577 23:16:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:50.577 23:16:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.838 23:16:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.838 23:16:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:13:50.838 23:16:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:51.099 true 00:13:51.099 23:16:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:51.099 23:16:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.359 23:16:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.359 23:16:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:13:51.359 23:16:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:51.621 true 00:13:51.621 23:16:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:51.621 23:16:40 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.882 23:16:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.882 23:16:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:13:51.882 23:16:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:52.143 true 00:13:52.143 23:16:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:52.143 23:16:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.403 23:16:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.403 23:16:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:13:52.403 23:16:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:52.663 true 00:13:52.663 23:16:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:52.663 23:16:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.663 23:16:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.924 23:16:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:13:52.924 23:16:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:52.924 true 00:13:53.184 23:16:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:53.184 23:16:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.184 23:16:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.445 23:16:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:13:53.445 23:16:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:53.445 true 00:13:53.445 23:16:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:53.445 23:16:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.705 23:16:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.964 23:16:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:13:53.964 23:16:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:53.964 true 00:13:53.964 23:16:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:53.964 23:16:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.224 23:16:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.484 23:16:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:13:54.484 23:16:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:54.484 true 00:13:54.484 23:16:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:54.484 23:16:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.745 23:16:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.005 23:16:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:13:55.005 23:16:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:55.005 true 00:13:55.005 23:16:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:55.005 23:16:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.266 23:16:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.527 23:16:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:13:55.527 23:16:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:55.527 true 00:13:55.527 23:16:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:55.527 23:16:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.788 23:16:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.048 23:16:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:13:56.048 23:16:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:56.048 true 00:13:56.048 23:16:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:56.048 23:16:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.309 23:16:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.570 23:16:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:13:56.570 23:16:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:56.570 true 00:13:56.570 23:16:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:56.570 23:16:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.830 23:16:45 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.090 23:16:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:13:57.090 23:16:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:57.090 true 00:13:57.090 23:16:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:57.090 23:16:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.351 23:16:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.611 23:16:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:13:57.611 23:16:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:57.611 true 00:13:57.611 23:16:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:57.611 23:16:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.871 23:16:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.132 23:16:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:13:58.132 23:16:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:58.132 true 00:13:58.132 23:16:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:58.132 23:16:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.392 23:16:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.654 23:16:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:13:58.654 23:16:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:58.654 true 00:13:58.654 23:16:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:58.654 23:16:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.914 23:16:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.175 23:16:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:13:59.175 23:16:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:59.175 true 00:13:59.175 23:16:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:59.175 23:16:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.435 23:16:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:13:59.696 23:16:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:13:59.696 23:16:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:59.696 true 00:13:59.696 23:16:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:13:59.696 23:16:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.957 23:16:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.219 23:16:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:14:00.219 23:16:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:00.219 true 00:14:00.219 23:16:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:00.219 23:16:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.482 23:16:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.812 23:16:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:14:00.812 23:16:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:00.812 true 00:14:00.812 23:16:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:00.812 23:16:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.102 23:16:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.102 23:16:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:14:01.102 23:16:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:01.363 true 00:14:01.363 23:16:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:01.363 23:16:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.623 23:16:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.623 23:16:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:14:01.623 23:16:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:01.883 true 00:14:01.883 23:16:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:01.883 23:16:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.143 23:16:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.143 23:16:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:14:02.143 23:16:51 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:02.403 true 00:14:02.403 23:16:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:02.403 23:16:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.662 23:16:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.662 23:16:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:14:02.662 23:16:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:02.923 true 00:14:02.923 23:16:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:02.923 23:16:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.183 23:16:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.184 23:16:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:14:03.184 23:16:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:03.444 true 00:14:03.444 23:16:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:03.444 23:16:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.704 23:16:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.704 23:16:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:14:03.704 23:16:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:03.964 true 00:14:03.964 23:16:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:03.964 23:16:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.964 23:16:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.225 23:16:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:14:04.225 23:16:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:04.485 true 00:14:04.485 23:16:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:04.485 23:16:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.485 23:16:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.747 23:16:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:14:04.747 23:16:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1040 00:14:05.008 true 00:14:05.008 23:16:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:05.008 23:16:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.008 23:16:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.269 23:16:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:14:05.269 23:16:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:05.269 true 00:14:05.269 23:16:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:05.269 23:16:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.530 23:16:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.791 23:16:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:14:05.791 23:16:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:05.791 true 00:14:05.791 23:16:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:05.791 23:16:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.051 23:16:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.312 23:16:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:14:06.312 23:16:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:06.312 true 00:14:06.312 23:16:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:06.312 23:16:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.574 23:16:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.574 23:16:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:14:06.574 23:16:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:06.834 true 00:14:06.834 23:16:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:06.834 23:16:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.094 23:16:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.094 23:16:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:14:07.094 23:16:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:07.354 true 00:14:07.354 23:16:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:07.354 
23:16:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.615 23:16:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.615 23:16:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:14:07.615 23:16:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:07.875 true 00:14:07.875 23:16:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:07.875 23:16:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.135 23:16:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.135 23:16:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:14:08.135 23:16:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:08.396 true 00:14:08.396 23:16:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:08.396 23:16:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.657 23:16:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.657 23:16:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:14:08.657 23:16:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:08.918 true 00:14:08.918 23:16:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:08.918 23:16:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.918 23:16:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.178 23:16:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:14:09.178 23:16:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:09.439 true 00:14:09.439 23:16:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:09.439 23:16:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.439 23:16:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.699 23:16:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:14:09.699 23:16:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:09.959 true 00:14:09.959 23:16:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:09.959 23:16:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.959 23:16:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.220 23:16:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:14:10.220 23:16:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:10.482 true 00:14:10.482 23:16:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:10.482 23:16:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.482 23:16:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.744 23:16:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:14:10.744 23:16:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:11.014 true 00:14:11.015 23:17:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:11.015 23:17:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.015 23:17:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.281 23:17:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:14:11.281 23:17:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:11.281 true 00:14:11.281 23:17:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:11.281 23:17:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.543 23:17:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.805 23:17:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054 00:14:11.805 23:17:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:11.805 true 00:14:11.805 23:17:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:11.805 23:17:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.067 23:17:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.067 23:17:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1055 00:14:12.067 23:17:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:12.328 true 00:14:12.328 23:17:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:12.328 23:17:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.589 23:17:01 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.850 23:17:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1056 00:14:12.851 23:17:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:14:12.851 true 00:14:12.851 23:17:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:12.851 23:17:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.112 23:17:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.112 23:17:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1057 00:14:13.112 23:17:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:14:13.373 true 00:14:13.373 23:17:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:13.373 23:17:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.635 23:17:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.635 23:17:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1058 00:14:13.635 23:17:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:14:13.896 true 00:14:13.896 23:17:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:13.896 23:17:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.157 23:17:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.157 23:17:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1059 00:14:14.157 23:17:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:14:14.419 true 00:14:14.419 23:17:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:14.419 23:17:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.680 23:17:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.680 Initializing NVMe Controllers 00:14:14.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:14.680 Controller IO queue size 128, less than required. 00:14:14.680 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:14.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:14.680 Initialization complete. Launching workers. 
00:14:14.680 ========================================================
00:14:14.680 Latency(us)
00:14:14.680 Device Information : IOPS MiB/s Average min max
00:14:14.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 21330.73 10.42 6000.68 1772.49 10642.54
00:14:14.680 ========================================================
00:14:14.680 Total : 21330.73 10.42 6000.68 1772.49 10642.54
00:14:14.680
00:14:14.680 23:17:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1060 00:14:14.680 23:17:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:14:14.942 true 00:14:14.942 23:17:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3850295 00:14:14.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3850295) - No such process 00:14:14.942 23:17:04 -- target/ns_hotplug_stress.sh@44 -- # wait 3850295 00:14:14.942 23:17:04 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:14.942 23:17:04 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:14:14.942 23:17:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:14.942 23:17:04 -- nvmf/common.sh@117 -- # sync 00:14:14.942 23:17:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:14.942 23:17:04 -- nvmf/common.sh@120 -- # set +e 00:14:14.942 23:17:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:14.942 23:17:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:14.942 rmmod nvme_tcp 00:14:14.942 rmmod nvme_fabrics 00:14:14.942 rmmod nvme_keyring 00:14:14.942 23:17:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:14.942 23:17:04 -- nvmf/common.sh@124 -- # set -e 00:14:14.942 23:17:04 -- nvmf/common.sh@125 -- # return 0 00:14:14.942 23:17:04 -- nvmf/common.sh@478 -- # '[' -n 3849917 ']' 00:14:14.942 23:17:04 -- nvmf/common.sh@479 -- # killprocess 3849917 00:14:14.942 23:17:04 -- common/autotest_common.sh@936 -- # '[' -z 3849917 ']' 00:14:14.942 23:17:04 -- common/autotest_common.sh@940 -- # kill -0 3849917 00:14:14.942 23:17:04 -- common/autotest_common.sh@941 -- # uname 00:14:14.942 23:17:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.942 23:17:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3849917 00:14:15.204 23:17:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:15.204 23:17:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:15.204 23:17:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3849917' 00:14:15.204 killing process with pid 3849917 00:14:15.204 23:17:04 -- common/autotest_common.sh@955 -- # kill 3849917 00:14:15.204 23:17:04 -- common/autotest_common.sh@960 -- # wait 3849917 00:14:15.204 23:17:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:15.204 23:17:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:15.204 23:17:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:15.204 23:17:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.204 23:17:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.204 23:17:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.204 23:17:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.204 23:17:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.752 23:17:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:17.752 00:14:17.752 real 0m42.491s user 2m35.029s
00:14:17.752 sys 0m12.199s 00:14:17.752 23:17:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:17.752 23:17:06 -- common/autotest_common.sh@10 -- # set +x 00:14:17.752 ************************************ 00:14:17.752 END TEST nvmf_ns_hotplug_stress 00:14:17.752 ************************************ 00:14:17.752 23:17:06 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:17.752 23:17:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:17.752 23:17:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.752 23:17:06 -- common/autotest_common.sh@10 -- # set +x 00:14:17.752 ************************************ 00:14:17.752 START TEST nvmf_connect_stress 00:14:17.752 ************************************ 00:14:17.752 23:17:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:17.752 * Looking for test storage... 00:14:17.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.752 23:17:06 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.752 23:17:06 -- nvmf/common.sh@7 -- # uname -s 00:14:17.752 23:17:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.752 23:17:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.752 23:17:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.752 23:17:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.752 23:17:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.752 23:17:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.752 23:17:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.752 23:17:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.752 23:17:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.752 23:17:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.752 23:17:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:17.752 23:17:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:17.752 23:17:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.752 23:17:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.752 23:17:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.752 23:17:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.752 23:17:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.752 23:17:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.752 23:17:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.752 23:17:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.752 23:17:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.752 23:17:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.752 23:17:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.752 23:17:06 -- paths/export.sh@5 -- # export PATH 00:14:17.752 23:17:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.752 23:17:06 -- nvmf/common.sh@47 -- # : 0 00:14:17.752 23:17:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.752 23:17:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.752 23:17:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.752 23:17:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.752 23:17:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.752 23:17:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.752 23:17:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.752 23:17:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.752 23:17:06 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:17.752 23:17:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:17.752 23:17:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.752 23:17:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:17.752 23:17:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:17.752 23:17:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:17.752 23:17:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.752 23:17:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.752 23:17:06 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.752 23:17:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:17.752 23:17:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:17.752 23:17:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.752 23:17:06 -- common/autotest_common.sh@10 -- # set +x 00:14:24.339 23:17:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:24.339 23:17:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.339 23:17:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.339 23:17:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.339 23:17:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.339 23:17:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.339 23:17:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.339 23:17:13 -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.340 23:17:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.340 23:17:13 -- nvmf/common.sh@296 -- # e810=() 00:14:24.340 23:17:13 -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.340 23:17:13 -- nvmf/common.sh@297 -- # x722=() 00:14:24.340 23:17:13 -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.340 23:17:13 -- nvmf/common.sh@298 -- # mlx=() 00:14:24.340 23:17:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.340 23:17:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.340 23:17:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.340 23:17:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:24.340 23:17:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.340 23:17:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.340 23:17:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:24.340 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:24.340 23:17:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.340 23:17:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:24.340 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:24.340 
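The device scan above is worth spelling out: common.sh collects candidate Intel E810/X722 and Mellanox PCI IDs, keeps the E810 pair present on this rig, and then maps each PCI function to its kernel interface by globbing sysfs, which is what produces the 'Found net devices under 0000:31:00.x' lines. A minimal sketch of that lookup for one function from this log (the nullglob line is added so the standalone sketch fails cleanly; the script uses its own count check):

    # Sketch of the pci -> netdev step from test/nvmf/common.sh@382-389;
    # 0000:31:00.0 is the E810 port (0x8086:0x159b) reported above.
    shopt -s nullglob                                   # a missing netdev yields an empty array
    pci=0000:31:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    (( ${#pci_net_devs[@]} == 0 )) && { echo "no net device under $pci" >&2; exit 1; }
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep the iface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"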
23:17:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.340 23:17:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.340 23:17:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.340 23:17:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:24.340 23:17:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.340 23:17:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:24.340 Found net devices under 0000:31:00.0: cvl_0_0 00:14:24.340 23:17:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.340 23:17:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.340 23:17:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.340 23:17:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:24.340 23:17:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.340 23:17:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:24.340 Found net devices under 0000:31:00.1: cvl_0_1 00:14:24.340 23:17:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.340 23:17:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:24.340 23:17:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:24.340 23:17:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:24.340 23:17:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:24.340 23:17:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.340 23:17:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.340 23:17:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.340 23:17:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.340 23:17:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.340 23:17:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.340 23:17:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.340 23:17:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.340 23:17:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.340 23:17:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.340 23:17:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.340 23:17:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.340 23:17:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.340 23:17:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.340 23:17:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.601 23:17:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.601 23:17:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.601 23:17:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.601 23:17:13 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.601 23:17:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:24.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.734 ms 00:14:24.601 00:14:24.601 --- 10.0.0.2 ping statistics --- 00:14:24.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.601 rtt min/avg/max/mdev = 0.734/0.734/0.734/0.000 ms 00:14:24.601 23:17:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:14:24.601 00:14:24.601 --- 10.0.0.1 ping statistics --- 00:14:24.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.601 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:14:24.601 23:17:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.601 23:17:13 -- nvmf/common.sh@411 -- # return 0 00:14:24.601 23:17:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:24.601 23:17:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.601 23:17:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:24.601 23:17:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:24.601 23:17:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.601 23:17:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:24.601 23:17:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:24.601 23:17:13 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:24.601 23:17:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:24.601 23:17:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:24.601 23:17:13 -- common/autotest_common.sh@10 -- # set +x 00:14:24.601 23:17:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:24.601 23:17:13 -- nvmf/common.sh@470 -- # nvmfpid=3860794 00:14:24.601 23:17:13 -- nvmf/common.sh@471 -- # waitforlisten 3860794 00:14:24.601 23:17:13 -- common/autotest_common.sh@817 -- # '[' -z 3860794 ']' 00:14:24.601 23:17:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.601 23:17:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:24.601 23:17:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.601 23:17:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:24.601 23:17:13 -- common/autotest_common.sh@10 -- # set +x 00:14:24.601 [2024-04-26 23:17:13.836965] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:24.601 [2024-04-26 23:17:13.837026] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.865 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.866 [2024-04-26 23:17:13.904759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:24.866 [2024-04-26 23:17:13.935129] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
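
The nvmf_tcp_init trace above splits the two NIC ports across network namespaces: cvl_0_0 (the target side, 10.0.0.2) is moved into cvl_0_0_ns_spdk while cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, an iptables rule admits TCP/4420, and a ping each way proves the path. Condensed from the trace (interface names are the ones this run discovered):

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                  # sanity-check reachability both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
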
00:14:24.866 [2024-04-26 23:17:13.935169] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.866 [2024-04-26 23:17:13.935182] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.866 [2024-04-26 23:17:13.935188] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.866 [2024-04-26 23:17:13.935194] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.866 [2024-04-26 23:17:13.935298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.866 [2024-04-26 23:17:13.935454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.866 [2024-04-26 23:17:13.935456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.866 23:17:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:24.866 23:17:14 -- common/autotest_common.sh@850 -- # return 0 00:14:24.866 23:17:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:24.866 23:17:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:24.866 23:17:14 -- common/autotest_common.sh@10 -- # set +x 00:14:24.866 23:17:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.866 23:17:14 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:24.866 23:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:24.866 23:17:14 -- common/autotest_common.sh@10 -- # set +x 00:14:24.866 [2024-04-26 23:17:14.062780] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.866 23:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:24.866 23:17:14 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:24.866 23:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:24.866 23:17:14 -- common/autotest_common.sh@10 -- # set +x 00:14:24.866 23:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:24.866 23:17:14 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.866 23:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:24.866 23:17:14 -- common/autotest_common.sh@10 -- # set +x 00:14:24.866 [2024-04-26 23:17:14.087225] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.866 23:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:24.866 23:17:14 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:24.866 23:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:24.866 23:17:14 -- common/autotest_common.sh@10 -- # set +x 00:14:24.866 NULL1 00:14:24.866 23:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:24.866 23:17:14 -- target/connect_stress.sh@21 -- # PERF_PID=3860895 00:14:24.866 23:17:14 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:24.866 23:17:14 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:24.866 23:17:14 -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:25.128 23:17:14 -- target/connect_stress.sh@27 -- # seq 1 20
[loop trace elided: 'target/connect_stress.sh@27 -- # for i in $(seq 1 20)' and 'target/connect_stress.sh@28 -- # cat' repeat for all 20 iterations]
00:14:25.128 EAL: No free 2048 kB hugepages reported on node 1
[liveness loop elided: 'target/connect_stress.sh@34 -- # kill -0 3860895' followed by 'target/connect_stress.sh@35 -- # rpc_cmd' and the xtrace_disable bookkeeping repeats from 00:14:25.128 (23:17:14) through 00:14:35.011 (23:17:23) while connect_stress runs]
00:14:35.279 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
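
That banner is printed by the connect_stress tool itself; while it runs for its 10 seconds (-t 10), the loop elided above keeps probing it and replaying management RPCs at the target. A minimal sketch of the pattern, with rpc.py standing in for the test's rpc_cmd wrapper and paths abbreviated:

# Minimal sketch of the liveness loop (rpc.txt is the batch of RPCs the test replays):
./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
PERF_PID=$!
while kill -0 "$PERF_PID" 2>/dev/null; do            # loop while the stress tool is alive
    ./scripts/rpc.py -s /var/tmp/spdk.sock < rpc.txt # keep hitting the target with RPCs
done
wait "$PERF_PID"                                     # reap it; connect_stress exits 0 on success
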
00:14:35.279 23:17:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:35.279 23:17:24 -- target/connect_stress.sh@34 -- # kill -0 3860895
00:14:35.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3860895) - No such process
00:14:35.279 23:17:24 -- target/connect_stress.sh@38 -- # wait 3860895
00:14:35.279 23:17:24 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:35.279 23:17:24 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:14:35.279 23:17:24 -- target/connect_stress.sh@43 -- # nvmftestfini
[teardown trace elided: nvmfcleanup unloads the kernel modules (rmmod nvme_tcp, rmmod nvme_fabrics, rmmod nvme_keyring), killprocess stops nvmf_tgt pid 3860794 ('killing process with pid 3860794'), and remove_spdk_ns deletes the cvl_0_0_ns_spdk namespace]
00:14:37.456 23:17:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:37.456 real 0m20.041s
00:14:37.456 user 0m40.435s
00:14:37.456 sys 0m8.439s
00:14:37.456 23:17:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:37.456 23:17:26 -- common/autotest_common.sh@10 -- # set +x
00:14:37.456 ************************************
00:14:37.456 END TEST nvmf_connect_stress
00:14:37.456 ************************************
00:14:37.456 23:17:26 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:37.456 23:17:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:37.456 23:17:26 -- common/autotest_common.sh@1093 -- #
xtrace_disable 00:14:37.456 23:17:26 -- common/autotest_common.sh@10 -- # set +x 00:14:37.718 ************************************ 00:14:37.718 START TEST nvmf_fused_ordering 00:14:37.718 ************************************ 00:14:37.718 23:17:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:37.718 * Looking for test storage... 00:14:37.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.718 23:17:26 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.718 23:17:26 -- nvmf/common.sh@7 -- # uname -s 00:14:37.718 23:17:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.718 23:17:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.718 23:17:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.718 23:17:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.718 23:17:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.718 23:17:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.718 23:17:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.718 23:17:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.718 23:17:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.718 23:17:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.718 23:17:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:37.718 23:17:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:37.718 23:17:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.718 23:17:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.718 23:17:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.718 23:17:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.718 23:17:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.718 23:17:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.718 23:17:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.718 23:17:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.718 23:17:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.718 23:17:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.718 23:17:26 -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.718 23:17:26 -- paths/export.sh@5 -- # export PATH 00:14:37.718 23:17:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.718 23:17:26 -- nvmf/common.sh@47 -- # : 0 00:14:37.718 23:17:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.718 23:17:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.718 23:17:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.718 23:17:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.718 23:17:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.718 23:17:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.718 23:17:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.718 23:17:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.718 23:17:26 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:37.718 23:17:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:37.718 23:17:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.718 23:17:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:37.718 23:17:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:37.718 23:17:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:37.718 23:17:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.718 23:17:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.718 23:17:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.718 23:17:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:37.718 23:17:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:37.718 23:17:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:37.718 23:17:26 -- common/autotest_common.sh@10 -- # set +x 00:14:45.859 23:17:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:45.859 23:17:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:45.859 23:17:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:45.859 23:17:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:45.859 23:17:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:45.859 23:17:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:45.859 23:17:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:45.859 23:17:33 -- nvmf/common.sh@295 -- # net_devs=() 00:14:45.859 23:17:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:45.859 23:17:33 -- nvmf/common.sh@296 -- # e810=() 00:14:45.859 23:17:33 -- nvmf/common.sh@296 -- # local -ga e810 00:14:45.859 23:17:33 -- nvmf/common.sh@297 -- # 
x722=() 00:14:45.859 23:17:33 -- nvmf/common.sh@297 -- # local -ga x722 00:14:45.859 23:17:33 -- nvmf/common.sh@298 -- # mlx=() 00:14:45.859 23:17:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:45.859 23:17:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.859 23:17:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:45.859 23:17:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:45.859 23:17:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:45.859 23:17:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.859 23:17:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:45.859 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:45.859 23:17:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.859 23:17:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:45.859 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:45.859 23:17:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:45.859 23:17:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.859 23:17:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.859 23:17:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:45.859 23:17:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.859 23:17:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:45.859 Found net devices under 0000:31:00.0: cvl_0_0 00:14:45.859 23:17:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:45.859 23:17:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.859 23:17:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.859 23:17:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:45.859 23:17:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.859 23:17:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:45.859 Found net devices under 0000:31:00.1: cvl_0_1 00:14:45.859 23:17:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.859 23:17:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:45.859 23:17:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:45.859 23:17:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:45.859 23:17:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:45.859 23:17:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.859 23:17:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.859 23:17:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.859 23:17:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:45.859 23:17:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.859 23:17:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.859 23:17:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:45.859 23:17:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.859 23:17:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.859 23:17:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:45.859 23:17:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:45.859 23:17:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.859 23:17:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.859 23:17:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.859 23:17:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.859 23:17:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:45.859 23:17:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.859 23:17:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.859 23:17:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.859 23:17:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:45.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:14:45.859 00:14:45.859 --- 10.0.0.2 ping statistics --- 00:14:45.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.859 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:14:45.859 23:17:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:14:45.859 00:14:45.859 --- 10.0.0.1 ping statistics --- 00:14:45.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.859 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:14:45.859 23:17:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.859 23:17:34 -- nvmf/common.sh@411 -- # return 0 00:14:45.859 23:17:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:45.859 23:17:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.859 23:17:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:45.859 23:17:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:45.859 23:17:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.859 23:17:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:45.859 23:17:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:45.859 23:17:34 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:45.859 23:17:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:45.859 23:17:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:45.859 23:17:34 -- common/autotest_common.sh@10 -- # set +x 00:14:45.859 23:17:34 -- nvmf/common.sh@470 -- # nvmfpid=3867015 00:14:45.859 23:17:34 -- nvmf/common.sh@471 -- # waitforlisten 3867015 00:14:45.859 23:17:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:45.859 23:17:34 -- common/autotest_common.sh@817 -- # '[' -z 3867015 ']' 00:14:45.859 23:17:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.859 23:17:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:45.859 23:17:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.859 23:17:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:45.859 23:17:34 -- common/autotest_common.sh@10 -- # set +x 00:14:45.859 [2024-04-26 23:17:34.353372] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:45.859 [2024-04-26 23:17:34.353438] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.859 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.859 [2024-04-26 23:17:34.427286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.859 [2024-04-26 23:17:34.463755] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.859 [2024-04-26 23:17:34.463807] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.859 [2024-04-26 23:17:34.463818] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.859 [2024-04-26 23:17:34.463825] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.859 [2024-04-26 23:17:34.463831] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
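
nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, where the polling loop is a stand-in for the real waitforlisten helper in autotest_common.sh:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for i in $(seq 1 100); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; then
        break                                        # RPC socket is up; target is ready
    fi
    sleep 0.1
done
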
00:14:45.859 [2024-04-26 23:17:34.463877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.120 23:17:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:46.120 23:17:35 -- common/autotest_common.sh@850 -- # return 0 00:14:46.120 23:17:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:46.120 23:17:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:46.120 23:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.120 23:17:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.120 23:17:35 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.120 23:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.120 23:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.120 [2024-04-26 23:17:35.165996] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.120 23:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.120 23:17:35 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:46.120 23:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.120 23:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.120 23:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.120 23:17:35 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.120 23:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.120 23:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.120 [2024-04-26 23:17:35.190178] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.120 23:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.120 23:17:35 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:46.120 23:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.120 23:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.120 NULL1 00:14:46.120 23:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.120 23:17:35 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:46.120 23:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.120 23:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.120 23:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.120 23:17:35 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:46.120 23:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.120 23:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.120 23:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.120 23:17:35 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:46.120 [2024-04-26 23:17:35.252077] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:46.120 [2024-04-26 23:17:35.252120] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3867344 ]
00:14:46.120 EAL: No free 2048 kB hugepages reported on node 1
00:14:46.691 Attached to nqn.2016-06.io.spdk:cnode1
00:14:46.691 Namespace ID: 1 size: 1GB
00:14:46.691 fused_ordering(0)
[progress trace elided: fused_ordering(1) through fused_ordering(525), one counter per line, timestamps advancing from 00:14:46.691 to 00:14:47.525]
00:14:47.525 fused_ordering(526)
fused_ordering(527) 00:14:47.525 fused_ordering(528) 00:14:47.525 fused_ordering(529) 00:14:47.525 fused_ordering(530) 00:14:47.525 fused_ordering(531) 00:14:47.525 fused_ordering(532) 00:14:47.525 fused_ordering(533) 00:14:47.525 fused_ordering(534) 00:14:47.525 fused_ordering(535) 00:14:47.525 fused_ordering(536) 00:14:47.525 fused_ordering(537) 00:14:47.525 fused_ordering(538) 00:14:47.525 fused_ordering(539) 00:14:47.525 fused_ordering(540) 00:14:47.525 fused_ordering(541) 00:14:47.525 fused_ordering(542) 00:14:47.525 fused_ordering(543) 00:14:47.525 fused_ordering(544) 00:14:47.525 fused_ordering(545) 00:14:47.525 fused_ordering(546) 00:14:47.525 fused_ordering(547) 00:14:47.525 fused_ordering(548) 00:14:47.525 fused_ordering(549) 00:14:47.525 fused_ordering(550) 00:14:47.525 fused_ordering(551) 00:14:47.525 fused_ordering(552) 00:14:47.525 fused_ordering(553) 00:14:47.525 fused_ordering(554) 00:14:47.525 fused_ordering(555) 00:14:47.525 fused_ordering(556) 00:14:47.525 fused_ordering(557) 00:14:47.525 fused_ordering(558) 00:14:47.525 fused_ordering(559) 00:14:47.525 fused_ordering(560) 00:14:47.525 fused_ordering(561) 00:14:47.525 fused_ordering(562) 00:14:47.525 fused_ordering(563) 00:14:47.525 fused_ordering(564) 00:14:47.525 fused_ordering(565) 00:14:47.525 fused_ordering(566) 00:14:47.525 fused_ordering(567) 00:14:47.525 fused_ordering(568) 00:14:47.525 fused_ordering(569) 00:14:47.525 fused_ordering(570) 00:14:47.525 fused_ordering(571) 00:14:47.525 fused_ordering(572) 00:14:47.525 fused_ordering(573) 00:14:47.525 fused_ordering(574) 00:14:47.525 fused_ordering(575) 00:14:47.525 fused_ordering(576) 00:14:47.525 fused_ordering(577) 00:14:47.525 fused_ordering(578) 00:14:47.525 fused_ordering(579) 00:14:47.525 fused_ordering(580) 00:14:47.525 fused_ordering(581) 00:14:47.525 fused_ordering(582) 00:14:47.525 fused_ordering(583) 00:14:47.525 fused_ordering(584) 00:14:47.525 fused_ordering(585) 00:14:47.525 fused_ordering(586) 00:14:47.525 fused_ordering(587) 00:14:47.525 fused_ordering(588) 00:14:47.525 fused_ordering(589) 00:14:47.525 fused_ordering(590) 00:14:47.525 fused_ordering(591) 00:14:47.525 fused_ordering(592) 00:14:47.525 fused_ordering(593) 00:14:47.525 fused_ordering(594) 00:14:47.525 fused_ordering(595) 00:14:47.525 fused_ordering(596) 00:14:47.525 fused_ordering(597) 00:14:47.525 fused_ordering(598) 00:14:47.526 fused_ordering(599) 00:14:47.526 fused_ordering(600) 00:14:47.526 fused_ordering(601) 00:14:47.526 fused_ordering(602) 00:14:47.526 fused_ordering(603) 00:14:47.526 fused_ordering(604) 00:14:47.526 fused_ordering(605) 00:14:47.526 fused_ordering(606) 00:14:47.526 fused_ordering(607) 00:14:47.526 fused_ordering(608) 00:14:47.526 fused_ordering(609) 00:14:47.526 fused_ordering(610) 00:14:47.526 fused_ordering(611) 00:14:47.526 fused_ordering(612) 00:14:47.526 fused_ordering(613) 00:14:47.526 fused_ordering(614) 00:14:47.526 fused_ordering(615) 00:14:47.787 fused_ordering(616) 00:14:47.787 fused_ordering(617) 00:14:47.787 fused_ordering(618) 00:14:47.787 fused_ordering(619) 00:14:47.787 fused_ordering(620) 00:14:47.787 fused_ordering(621) 00:14:47.787 fused_ordering(622) 00:14:47.787 fused_ordering(623) 00:14:47.787 fused_ordering(624) 00:14:47.787 fused_ordering(625) 00:14:47.787 fused_ordering(626) 00:14:47.787 fused_ordering(627) 00:14:47.787 fused_ordering(628) 00:14:47.787 fused_ordering(629) 00:14:47.787 fused_ordering(630) 00:14:47.787 fused_ordering(631) 00:14:47.787 fused_ordering(632) 00:14:47.787 fused_ordering(633) 00:14:47.787 fused_ordering(634) 
00:14:47.787 fused_ordering(635) 00:14:47.787 fused_ordering(636) 00:14:47.787 fused_ordering(637) 00:14:47.787 fused_ordering(638) 00:14:47.787 fused_ordering(639) 00:14:47.787 fused_ordering(640) 00:14:47.787 fused_ordering(641) 00:14:47.787 fused_ordering(642) 00:14:47.787 fused_ordering(643) 00:14:47.787 fused_ordering(644) 00:14:47.787 fused_ordering(645) 00:14:47.787 fused_ordering(646) 00:14:47.787 fused_ordering(647) 00:14:47.787 fused_ordering(648) 00:14:47.787 fused_ordering(649) 00:14:47.787 fused_ordering(650) 00:14:47.787 fused_ordering(651) 00:14:47.787 fused_ordering(652) 00:14:47.787 fused_ordering(653) 00:14:47.787 fused_ordering(654) 00:14:47.787 fused_ordering(655) 00:14:47.787 fused_ordering(656) 00:14:47.787 fused_ordering(657) 00:14:47.787 fused_ordering(658) 00:14:47.787 fused_ordering(659) 00:14:47.787 fused_ordering(660) 00:14:47.787 fused_ordering(661) 00:14:47.787 fused_ordering(662) 00:14:47.787 fused_ordering(663) 00:14:47.787 fused_ordering(664) 00:14:47.787 fused_ordering(665) 00:14:47.787 fused_ordering(666) 00:14:47.787 fused_ordering(667) 00:14:47.787 fused_ordering(668) 00:14:47.787 fused_ordering(669) 00:14:47.787 fused_ordering(670) 00:14:47.787 fused_ordering(671) 00:14:47.787 fused_ordering(672) 00:14:47.787 fused_ordering(673) 00:14:47.787 fused_ordering(674) 00:14:47.787 fused_ordering(675) 00:14:47.787 fused_ordering(676) 00:14:47.787 fused_ordering(677) 00:14:47.787 fused_ordering(678) 00:14:47.787 fused_ordering(679) 00:14:47.787 fused_ordering(680) 00:14:47.787 fused_ordering(681) 00:14:47.787 fused_ordering(682) 00:14:47.787 fused_ordering(683) 00:14:47.787 fused_ordering(684) 00:14:47.787 fused_ordering(685) 00:14:47.787 fused_ordering(686) 00:14:47.787 fused_ordering(687) 00:14:47.787 fused_ordering(688) 00:14:47.787 fused_ordering(689) 00:14:47.787 fused_ordering(690) 00:14:47.787 fused_ordering(691) 00:14:47.787 fused_ordering(692) 00:14:47.787 fused_ordering(693) 00:14:47.787 fused_ordering(694) 00:14:47.787 fused_ordering(695) 00:14:47.787 fused_ordering(696) 00:14:47.787 fused_ordering(697) 00:14:47.787 fused_ordering(698) 00:14:47.787 fused_ordering(699) 00:14:47.787 fused_ordering(700) 00:14:47.787 fused_ordering(701) 00:14:47.787 fused_ordering(702) 00:14:47.787 fused_ordering(703) 00:14:47.787 fused_ordering(704) 00:14:47.787 fused_ordering(705) 00:14:47.787 fused_ordering(706) 00:14:47.787 fused_ordering(707) 00:14:47.787 fused_ordering(708) 00:14:47.787 fused_ordering(709) 00:14:47.787 fused_ordering(710) 00:14:47.787 fused_ordering(711) 00:14:47.787 fused_ordering(712) 00:14:47.787 fused_ordering(713) 00:14:47.787 fused_ordering(714) 00:14:47.787 fused_ordering(715) 00:14:47.787 fused_ordering(716) 00:14:47.787 fused_ordering(717) 00:14:47.787 fused_ordering(718) 00:14:47.787 fused_ordering(719) 00:14:47.787 fused_ordering(720) 00:14:47.787 fused_ordering(721) 00:14:47.787 fused_ordering(722) 00:14:47.787 fused_ordering(723) 00:14:47.787 fused_ordering(724) 00:14:47.787 fused_ordering(725) 00:14:47.787 fused_ordering(726) 00:14:47.787 fused_ordering(727) 00:14:47.787 fused_ordering(728) 00:14:47.787 fused_ordering(729) 00:14:47.787 fused_ordering(730) 00:14:47.787 fused_ordering(731) 00:14:47.787 fused_ordering(732) 00:14:47.787 fused_ordering(733) 00:14:47.787 fused_ordering(734) 00:14:47.787 fused_ordering(735) 00:14:47.787 fused_ordering(736) 00:14:47.787 fused_ordering(737) 00:14:47.787 fused_ordering(738) 00:14:47.787 fused_ordering(739) 00:14:47.787 fused_ordering(740) 00:14:47.787 fused_ordering(741) 00:14:47.787 
fused_ordering(742) 00:14:47.787 fused_ordering(743) 00:14:47.787 fused_ordering(744) 00:14:47.787 fused_ordering(745) 00:14:47.787 fused_ordering(746) 00:14:47.787 fused_ordering(747) 00:14:47.787 fused_ordering(748) 00:14:47.787 fused_ordering(749) 00:14:47.787 fused_ordering(750) 00:14:47.787 fused_ordering(751) 00:14:47.787 fused_ordering(752) 00:14:47.787 fused_ordering(753) 00:14:47.787 fused_ordering(754) 00:14:47.787 fused_ordering(755) 00:14:47.787 fused_ordering(756) 00:14:47.787 fused_ordering(757) 00:14:47.787 fused_ordering(758) 00:14:47.787 fused_ordering(759) 00:14:47.787 fused_ordering(760) 00:14:47.787 fused_ordering(761) 00:14:47.787 fused_ordering(762) 00:14:47.787 fused_ordering(763) 00:14:47.787 fused_ordering(764) 00:14:47.787 fused_ordering(765) 00:14:47.787 fused_ordering(766) 00:14:47.787 fused_ordering(767) 00:14:47.787 fused_ordering(768) 00:14:47.787 fused_ordering(769) 00:14:47.787 fused_ordering(770) 00:14:47.787 fused_ordering(771) 00:14:47.787 fused_ordering(772) 00:14:47.787 fused_ordering(773) 00:14:47.787 fused_ordering(774) 00:14:47.787 fused_ordering(775) 00:14:47.787 fused_ordering(776) 00:14:47.787 fused_ordering(777) 00:14:47.787 fused_ordering(778) 00:14:47.787 fused_ordering(779) 00:14:47.787 fused_ordering(780) 00:14:47.787 fused_ordering(781) 00:14:47.787 fused_ordering(782) 00:14:47.787 fused_ordering(783) 00:14:47.787 fused_ordering(784) 00:14:47.787 fused_ordering(785) 00:14:47.787 fused_ordering(786) 00:14:47.787 fused_ordering(787) 00:14:47.787 fused_ordering(788) 00:14:47.787 fused_ordering(789) 00:14:47.787 fused_ordering(790) 00:14:47.787 fused_ordering(791) 00:14:47.787 fused_ordering(792) 00:14:47.787 fused_ordering(793) 00:14:47.787 fused_ordering(794) 00:14:47.788 fused_ordering(795) 00:14:47.788 fused_ordering(796) 00:14:47.788 fused_ordering(797) 00:14:47.788 fused_ordering(798) 00:14:47.788 fused_ordering(799) 00:14:47.788 fused_ordering(800) 00:14:47.788 fused_ordering(801) 00:14:47.788 fused_ordering(802) 00:14:47.788 fused_ordering(803) 00:14:47.788 fused_ordering(804) 00:14:47.788 fused_ordering(805) 00:14:47.788 fused_ordering(806) 00:14:47.788 fused_ordering(807) 00:14:47.788 fused_ordering(808) 00:14:47.788 fused_ordering(809) 00:14:47.788 fused_ordering(810) 00:14:47.788 fused_ordering(811) 00:14:47.788 fused_ordering(812) 00:14:47.788 fused_ordering(813) 00:14:47.788 fused_ordering(814) 00:14:47.788 fused_ordering(815) 00:14:47.788 fused_ordering(816) 00:14:47.788 fused_ordering(817) 00:14:47.788 fused_ordering(818) 00:14:47.788 fused_ordering(819) 00:14:47.788 fused_ordering(820) 00:14:48.729 fused_ordering(821) 00:14:48.729 fused_ordering(822) 00:14:48.729 fused_ordering(823) 00:14:48.729 fused_ordering(824) 00:14:48.729 fused_ordering(825) 00:14:48.729 fused_ordering(826) 00:14:48.729 fused_ordering(827) 00:14:48.729 fused_ordering(828) 00:14:48.729 fused_ordering(829) 00:14:48.729 fused_ordering(830) 00:14:48.729 fused_ordering(831) 00:14:48.729 fused_ordering(832) 00:14:48.729 fused_ordering(833) 00:14:48.729 fused_ordering(834) 00:14:48.729 fused_ordering(835) 00:14:48.729 fused_ordering(836) 00:14:48.729 fused_ordering(837) 00:14:48.729 fused_ordering(838) 00:14:48.729 fused_ordering(839) 00:14:48.729 fused_ordering(840) 00:14:48.729 fused_ordering(841) 00:14:48.729 fused_ordering(842) 00:14:48.729 fused_ordering(843) 00:14:48.729 fused_ordering(844) 00:14:48.729 fused_ordering(845) 00:14:48.729 fused_ordering(846) 00:14:48.729 fused_ordering(847) 00:14:48.729 fused_ordering(848) 00:14:48.729 fused_ordering(849) 
00:14:48.729 fused_ordering(850) 00:14:48.729 fused_ordering(851) 00:14:48.729 fused_ordering(852) 00:14:48.729 fused_ordering(853) 00:14:48.729 fused_ordering(854) 00:14:48.729 fused_ordering(855) 00:14:48.729 fused_ordering(856) 00:14:48.729 fused_ordering(857) 00:14:48.729 fused_ordering(858) 00:14:48.729 fused_ordering(859) 00:14:48.729 fused_ordering(860) 00:14:48.729 fused_ordering(861) 00:14:48.729 fused_ordering(862) 00:14:48.729 fused_ordering(863) 00:14:48.729 fused_ordering(864) 00:14:48.729 fused_ordering(865) 00:14:48.729 fused_ordering(866) 00:14:48.729 fused_ordering(867) 00:14:48.729 fused_ordering(868) 00:14:48.729 fused_ordering(869) 00:14:48.729 fused_ordering(870) 00:14:48.729 fused_ordering(871) 00:14:48.729 fused_ordering(872) 00:14:48.729 fused_ordering(873) 00:14:48.729 fused_ordering(874) 00:14:48.729 fused_ordering(875) 00:14:48.729 fused_ordering(876) 00:14:48.729 fused_ordering(877) 00:14:48.729 fused_ordering(878) 00:14:48.729 fused_ordering(879) 00:14:48.729 fused_ordering(880) 00:14:48.729 fused_ordering(881) 00:14:48.729 fused_ordering(882) 00:14:48.729 fused_ordering(883) 00:14:48.729 fused_ordering(884) 00:14:48.729 fused_ordering(885) 00:14:48.729 fused_ordering(886) 00:14:48.729 fused_ordering(887) 00:14:48.729 fused_ordering(888) 00:14:48.729 fused_ordering(889) 00:14:48.729 fused_ordering(890) 00:14:48.729 fused_ordering(891) 00:14:48.729 fused_ordering(892) 00:14:48.729 fused_ordering(893) 00:14:48.729 fused_ordering(894) 00:14:48.729 fused_ordering(895) 00:14:48.729 fused_ordering(896) 00:14:48.729 fused_ordering(897) 00:14:48.729 fused_ordering(898) 00:14:48.729 fused_ordering(899) 00:14:48.729 fused_ordering(900) 00:14:48.729 fused_ordering(901) 00:14:48.729 fused_ordering(902) 00:14:48.729 fused_ordering(903) 00:14:48.729 fused_ordering(904) 00:14:48.729 fused_ordering(905) 00:14:48.729 fused_ordering(906) 00:14:48.729 fused_ordering(907) 00:14:48.729 fused_ordering(908) 00:14:48.729 fused_ordering(909) 00:14:48.729 fused_ordering(910) 00:14:48.729 fused_ordering(911) 00:14:48.729 fused_ordering(912) 00:14:48.729 fused_ordering(913) 00:14:48.729 fused_ordering(914) 00:14:48.729 fused_ordering(915) 00:14:48.729 fused_ordering(916) 00:14:48.729 fused_ordering(917) 00:14:48.729 fused_ordering(918) 00:14:48.729 fused_ordering(919) 00:14:48.729 fused_ordering(920) 00:14:48.729 fused_ordering(921) 00:14:48.729 fused_ordering(922) 00:14:48.729 fused_ordering(923) 00:14:48.729 fused_ordering(924) 00:14:48.729 fused_ordering(925) 00:14:48.729 fused_ordering(926) 00:14:48.729 fused_ordering(927) 00:14:48.729 fused_ordering(928) 00:14:48.729 fused_ordering(929) 00:14:48.729 fused_ordering(930) 00:14:48.729 fused_ordering(931) 00:14:48.729 fused_ordering(932) 00:14:48.729 fused_ordering(933) 00:14:48.729 fused_ordering(934) 00:14:48.729 fused_ordering(935) 00:14:48.729 fused_ordering(936) 00:14:48.729 fused_ordering(937) 00:14:48.729 fused_ordering(938) 00:14:48.729 fused_ordering(939) 00:14:48.729 fused_ordering(940) 00:14:48.729 fused_ordering(941) 00:14:48.729 fused_ordering(942) 00:14:48.729 fused_ordering(943) 00:14:48.729 fused_ordering(944) 00:14:48.729 fused_ordering(945) 00:14:48.729 fused_ordering(946) 00:14:48.729 fused_ordering(947) 00:14:48.729 fused_ordering(948) 00:14:48.729 fused_ordering(949) 00:14:48.729 fused_ordering(950) 00:14:48.729 fused_ordering(951) 00:14:48.729 fused_ordering(952) 00:14:48.729 fused_ordering(953) 00:14:48.729 fused_ordering(954) 00:14:48.729 fused_ordering(955) 00:14:48.729 fused_ordering(956) 00:14:48.729 
[fused_ordering(957) through fused_ordering(1023): final 67 iteration markers collapsed at 00:14:48.729 to 00:14:48.730]
00:14:48.730 23:17:37 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:48.730 23:17:37 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:48.730 23:17:37 -- nvmf/common.sh@477 -- # nvmfcleanup
00:14:48.730 23:17:37 -- nvmf/common.sh@117 -- # sync
00:14:48.730 23:17:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:48.730 23:17:37 -- nvmf/common.sh@120 -- # set +e
00:14:48.730 23:17:37 -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:48.730 23:17:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:48.730 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:48.730 23:17:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:48.730 23:17:37 -- nvmf/common.sh@124 -- # set -e
00:14:48.730 23:17:37 -- nvmf/common.sh@125 -- # return 0
00:14:48.730 23:17:37 -- nvmf/common.sh@478 -- # '[' -n 3867015 ']'
00:14:48.730 23:17:37 -- nvmf/common.sh@479 -- # killprocess 3867015
00:14:48.730 23:17:37 -- common/autotest_common.sh@936 -- # '[' -z 3867015 ']'
00:14:48.730 23:17:37 -- common/autotest_common.sh@940 -- # kill -0 3867015
00:14:48.730 23:17:37 -- common/autotest_common.sh@941 -- # uname
00:14:48.730 23:17:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:48.730 23:17:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3867015
00:14:48.730 23:17:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:48.730 23:17:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:48.730 23:17:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3867015'
killing process with pid 3867015
00:14:48.730 23:17:37 -- common/autotest_common.sh@955 -- # kill 3867015
00:14:48.730 23:17:37 -- common/autotest_common.sh@960 -- # wait 3867015
00:14:48.730 23:17:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:14:48.730 23:17:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:14:48.730 23:17:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:14:48.730 23:17:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:48.730 23:17:37 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:48.730 23:17:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:48.730 23:17:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:48.730 23:17:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:51.298 23:17:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:51.298
00:14:51.298 real 0m13.138s
00:14:51.298 user 0m7.043s
00:14:51.298 sys 0m6.827s
00:14:51.298 23:17:39 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:51.298 23:17:39 -- common/autotest_common.sh@10 -- # set +x
00:14:51.298 ************************************
00:14:51.298 END TEST nvmf_fused_ordering
00:14:51.298 ************************************
00:14:51.298 23:17:39 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:14:51.298 23:17:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:51.298 23:17:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:51.298 23:17:39 -- common/autotest_common.sh@10 -- # set +x
00:14:51.298 ************************************
00:14:51.298 START TEST nvmf_delete_subsystem
00:14:51.298 ************************************
00:14:51.298 23:17:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
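For readers skimming the teardown trace above: stripped of the xtrace plumbing and retry loop in spdk/test/nvmf/common.sh, nvmftestfini reduces to roughly the following. A minimal sketch, assuming the nvmf_tgt PID (3867015) and interface/namespace names from this run; the netns deletion line is my reading of what _remove_spdk_ns does here, not a verbatim trace.

# Sketch of the fused_ordering teardown, not the real common.sh logic.
sync                                  # flush dirty pages before unloading kernel modules
modprobe -v -r nvme-tcp               # also drops nvme_tcp's dependents (nvme_fabrics, nvme_keyring)
modprobe -v -r nvme-fabrics
kill 3867015                          # stop the nvmf_tgt started for this test
wait 3867015 2>/dev/null || true      # reap it; works because the script started it as a child
ip -4 addr flush cvl_0_1              # drop the initiator-side test address
ip netns delete cvl_0_0_ns_spdk       # assumption: _remove_spdk_ns tears down the target netns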
00:14:51.298 * Looking for test storage...
00:14:51.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:51.298 23:17:40 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:51.298 23:17:40 -- nvmf/common.sh@7 -- # uname -s
00:14:51.298 23:17:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:51.298 23:17:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:51.298 23:17:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:51.298 23:17:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:51.298 23:17:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:51.298 23:17:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:51.298 23:17:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:51.298 23:17:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:51.298 23:17:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:51.298 23:17:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:51.298 23:17:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:51.298 23:17:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:51.298 23:17:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:51.298 23:17:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:51.298 23:17:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:51.298 23:17:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:51.298 23:17:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:51.298 23:17:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:51.298 23:17:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:51.298 23:17:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:51.298 23:17:40 -- paths/export.sh@2 -- # PATH=[toolchain bin dirs (golangci 1.54.2, protoc 21.7, go 1.21.1) repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:51.298 23:17:40 -- paths/export.sh@3 -- # PATH=[same, with /opt/go/1.21.1/bin prepended again]
00:14:51.298 23:17:40 -- paths/export.sh@4 -- # PATH=[same, with /opt/protoc/21.7/bin prepended again]
00:14:51.298 23:17:40 -- paths/export.sh@5 -- # export PATH
00:14:51.298 23:17:40 -- paths/export.sh@6 -- # echo [the assembled PATH, as above]
00:14:51.299 23:17:40 -- nvmf/common.sh@47 -- # : 0
00:14:51.299 23:17:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:51.299 23:17:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:51.299 23:17:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:51.299 23:17:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:51.299 23:17:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:51.299 23:17:40 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:14:51.299 23:17:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:14:51.299 23:17:40 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:14:51.299 23:17:40 -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:14:51.299 23:17:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:14:51.299 23:17:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:51.299 23:17:40 -- nvmf/common.sh@437 -- # prepare_net_devs
00:14:51.299 23:17:40 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:14:51.299 23:17:40 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:14:51.299 23:17:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:51.299 23:17:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:51.299 23:17:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:51.299 23:17:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:14:51.299 23:17:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:14:51.299 23:17:40 -- nvmf/common.sh@285 -- # xtrace_disable
00:14:51.299 23:17:40 -- common/autotest_common.sh@10 -- # set +x
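As an aside on the identity variables sourced above: this particular test drives SPDK's own initiator (spdk_nvme_perf) rather than the kernel one, so nvme connect is never invoked here. Purely as an illustration of how NVME_HOSTNQN/NVME_HOSTID are meant to be consumed elsewhere in the suite (standard nvme-cli flags; the parameter-expansion helper is mine, not common.sh's):

# Illustration only; values match this run's generated host identity.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # strip the prefix, leaving the bare UUID
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"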
00:14:57.881 23:17:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:14:57.881 23:17:46 -- nvmf/common.sh@291 -- # pci_devs=()
00:14:57.881 23:17:46 -- nvmf/common.sh@291 -- # local -a pci_devs
00:14:57.881 23:17:46 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:14:57.881 23:17:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:14:57.881 23:17:46 -- nvmf/common.sh@293 -- # pci_drivers=()
00:14:57.881 23:17:46 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:14:57.881 23:17:46 -- nvmf/common.sh@295 -- # net_devs=()
00:14:57.881 23:17:46 -- nvmf/common.sh@295 -- # local -ga net_devs
00:14:57.881 23:17:46 -- nvmf/common.sh@296 -- # e810=()
00:14:57.881 23:17:46 -- nvmf/common.sh@296 -- # local -ga e810
00:14:57.881 23:17:46 -- nvmf/common.sh@297 -- # x722=()
00:14:57.881 23:17:46 -- nvmf/common.sh@297 -- # local -ga x722
00:14:57.881 23:17:46 -- nvmf/common.sh@298 -- # mlx=()
00:14:57.881 23:17:46 -- nvmf/common.sh@298 -- # local -ga mlx
00:14:57.881 23:17:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:57.881 23:17:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:14:57.881 23:17:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:14:57.881 23:17:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:14:57.881 23:17:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:14:57.881 23:17:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:14:57.881 23:17:46 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:14:57.881 23:17:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:57.881 23:17:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
Found 0000:31:00.0 (0x8086 - 0x159b)
00:14:57.881 23:17:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:57.881 23:17:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:57.881 23:17:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:57.881 23:17:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:57.881 23:17:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:57.882 23:17:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:57.882 23:17:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
Found 0000:31:00.1 (0x8086 - 0x159b)
00:14:57.882 23:17:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:57.882 23:17:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:57.882 23:17:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:57.882 23:17:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:57.882 23:17:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:57.882 23:17:46 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:14:57.882 23:17:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:14:57.882 23:17:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:14:57.882 23:17:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:57.882 23:17:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:57.882 23:17:46 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:14:57.882 23:17:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:57.882 23:17:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
Found net devices under 0000:31:00.0: cvl_0_0
00:14:57.882 23:17:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:14:57.882 23:17:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:57.882 23:17:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:57.882 23:17:46 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:14:57.882 23:17:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:57.882 23:17:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
Found net devices under 0000:31:00.1: cvl_0_1
00:14:57.882 23:17:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:14:57.882 23:17:46 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:14:57.882 23:17:46 -- nvmf/common.sh@403 -- # is_hw=yes
00:14:57.882 23:17:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:14:57.882 23:17:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:14:57.882 23:17:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:14:57.882 23:17:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:57.882 23:17:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:57.882 23:17:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:57.882 23:17:46 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:57.882 23:17:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:57.882 23:17:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:57.882 23:17:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:57.882 23:17:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:57.882 23:17:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:57.882 23:17:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:57.882 23:17:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:57.882 23:17:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:57.882 23:17:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:57.882 23:17:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:57.882 23:17:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:57.882 23:17:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:57.882 23:17:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:58.143 23:17:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:58.143 23:17:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:58.143 23:17:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:58.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:58.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms
00:14:58.143
00:14:58.143 --- 10.0.0.2 ping statistics ---
00:14:58.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:58.143 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms
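The namespace plumbing above is easy to lose in the xtrace noise. Condensed, the topology nvmf_tcp_init builds is the following (commands taken verbatim from the trace; cvl_0_0 and cvl_0_1 are the two E810 ports found earlier):

# Target port cvl_0_0 moves into its own namespace at 10.0.0.2; initiator port
# cvl_0_1 stays in the root namespace at 10.0.0.1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # initiator-to-target sanity check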
00:14:58.143 23:17:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:58.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:58.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms
00:14:58.143
00:14:58.143 --- 10.0.0.1 ping statistics ---
00:14:58.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:58.143 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms
00:14:58.143 23:17:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:58.143 23:17:47 -- nvmf/common.sh@411 -- # return 0
00:14:58.143 23:17:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:14:58.143 23:17:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:58.143 23:17:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:14:58.143 23:17:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:14:58.143 23:17:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:58.143 23:17:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:14:58.143 23:17:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:14:58.143 23:17:47 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:14:58.143 23:17:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:14:58.143 23:17:47 -- common/autotest_common.sh@710 -- # xtrace_disable
00:14:58.143 23:17:47 -- common/autotest_common.sh@10 -- # set +x
00:14:58.143 23:17:47 -- nvmf/common.sh@470 -- # nvmfpid=3872074
00:14:58.143 23:17:47 -- nvmf/common.sh@471 -- # waitforlisten 3872074
00:14:58.143 23:17:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:14:58.143 23:17:47 -- common/autotest_common.sh@817 -- # '[' -z 3872074 ']'
00:14:58.143 23:17:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:58.143 23:17:47 -- common/autotest_common.sh@822 -- # local max_retries=100
00:14:58.143 23:17:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:58.143 23:17:47 -- common/autotest_common.sh@826 -- # xtrace_disable
00:14:58.143 23:17:47 -- common/autotest_common.sh@10 -- # set +x
00:14:58.143 [2024-04-26 23:17:47.363162] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:14:58.143 [2024-04-26 23:17:47.363226] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:58.403 EAL: No free 2048 kB hugepages reported on node 1
00:14:58.403 [2024-04-26 23:17:47.434832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:58.403 [2024-04-26 23:17:47.472164] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:58.403 [2024-04-26 23:17:47.472215] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:58.403 [2024-04-26 23:17:47.472224] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:58.403 [2024-04-26 23:17:47.472230] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:58.403 [2024-04-26 23:17:47.472236] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:58.403 [2024-04-26 23:17:47.472359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:58.403 [2024-04-26 23:17:47.472364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:58.974 23:17:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:14:58.974 23:17:48 -- common/autotest_common.sh@850 -- # return 0
00:14:58.974 23:17:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:14:58.974 23:17:48 -- common/autotest_common.sh@716 -- # xtrace_disable
00:14:58.974 23:17:48 -- common/autotest_common.sh@10 -- # set +x
00:14:58.974 23:17:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:58.974 23:17:48 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:58.974 23:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:58.974 23:17:48 -- common/autotest_common.sh@10 -- # set +x
00:14:58.974 [2024-04-26 23:17:48.179440] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:58.974 23:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:58.974 23:17:48 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:58.974 23:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:58.974 23:17:48 -- common/autotest_common.sh@10 -- # set +x
00:14:58.974 23:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:58.974 23:17:48 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:58.974 23:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:58.974 23:17:48 -- common/autotest_common.sh@10 -- # set +x
00:14:58.974 [2024-04-26 23:17:48.203625] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:58.974 23:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:58.974 23:17:48 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:58.974 23:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:58.974 23:17:48 -- common/autotest_common.sh@10 -- # set +x
00:14:58.974 NULL1
00:14:58.974 23:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:58.974 23:17:48 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:14:58.974 23:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:58.974 23:17:48 -- common/autotest_common.sh@10 -- # set +x
00:14:59.234 Delay0
00:14:59.234 23:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:59.234 23:17:48 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:59.234 23:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:59.234 23:17:48 -- common/autotest_common.sh@10 -- # set +x
00:14:59.235 23:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:59.235 23:17:48 -- target/delete_subsystem.sh@28 -- # perf_pid=3872102
00:14:59.235 23:17:48 -- target/delete_subsystem.sh@30 -- # sleep 2
00:14:59.235 23:17:48 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:14:59.235 EAL: No free 2048 kB hugepages reported on node 1
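For readers reconstructing the target state, the rpc_cmd calls traced above condense to the sequence below. rpc.py is SPDK's stock RPC client (it talks to the nvmf_tgt over /var/tmp/spdk.sock); the arguments are exactly those traced, while the $RPC shorthand and the unit comments are my additions (SPDK's delay bdev takes latencies in microseconds, and bdev_null_create takes size in MiB and block size in bytes).

# Target-side setup performed by delete_subsystem.sh, collected in one place.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                   # -u: 8192-byte in-capsule data size
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                           # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1 s avg/p99 latencies
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 wrapper is the interesting design choice here: it guarantees I/O is still in flight when the subsystem is deleted a moment later.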
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:01.146 23:17:50 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.146 23:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.146 23:17:50 -- common/autotest_common.sh@10 -- # set +x 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 Write completed with error (sct=0, sc=8) 00:15:01.146 Read completed with error (sct=0, sc=8) 00:15:01.146 starting I/O failed: -6 00:15:01.146 [2024-04-26 
23:17:50.384945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151d2b0 is same with the state(5) to be set
00:15:01.146 [several hundred repeated "Read/Write completed with error (sct=0, sc=8)" completions and interleaved "starting I/O failed: -6" entries elided]
00:15:01.146 [2024-04-26 23:17:50.388317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc3000c3d0 is same with the state(5) to be set
00:15:01.146 [repeated "Read/Write completed with error (sct=0, sc=8)" completions elided]
00:15:02.528 [2024-04-26 23:17:51.357027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1505ce0 is same with the state(5) to be set
00:15:02.528 [repeated completions elided]
00:15:02.528 [2024-04-26 23:17:51.388003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151d440 is same with the state(5) to be set
00:15:02.528 [repeated completions elided]
00:15:02.528 [2024-04-26 23:17:51.388648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151d9c0 is same with the state(5) to be set
00:15:02.528 [repeated completions elided]
00:15:02.528 [2024-04-26 23:17:51.390199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc3000bf90 is same with the state(5) to be set
00:15:02.528 [repeated completions elided]
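The flood of aborted completions above (and the tail that follows) is the expected outcome of this test, not a transport failure: delete_subsystem.sh tears the subsystem down while spdk_nvme_perf still has a full queue of I/O in flight, so every outstanding command completes with sct=0, sc=8 (generic command status 08h, Command Aborted due to SQ Deletion). A condensed sketch of the sequence being exercised, with rpc.py arguments copied from the xtrace in this log (the explicit backgrounding is an assumption about the script's exact shape):

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &   # keep 128 commands in flight
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1                                                                   # delete mid-I/O; queued commands abort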
00:15:02.529 [2024-04-26 23:17:51.390318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc3000c690 is same with the state(5) to be set
00:15:02.529 [2024-04-26 23:17:51.390857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1505ce0 (9): Bad file descriptor
00:15:02.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:02.529 23:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:02.529 23:17:51 -- target/delete_subsystem.sh@34 -- # delay=0
00:15:02.529 23:17:51 -- target/delete_subsystem.sh@35 -- # kill -0 3872102
00:15:02.529 23:17:51 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:15:02.529 Initializing NVMe Controllers
00:15:02.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:02.529 Controller IO queue size 128, less than required.
00:15:02.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:02.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:02.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:02.529 Initialization complete. Launching workers.
00:15:02.529 ========================================================
00:15:02.529 Latency(us)
00:15:02.529 Device Information                                                      :    IOPS   MiB/s    Average       min         max
00:15:02.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  175.74    0.09  881480.93    261.28  1006916.70
00:15:02.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  157.82    0.08  976038.59    266.57  2002394.74
00:15:02.529 ========================================================
00:15:02.529 Total                                                                   :  333.56    0.16  926219.40    261.28  2002394.74
00:15:02.529
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@35 -- # kill -0 3872102
00:15:02.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3872102) - No such process
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@45 -- # NOT wait 3872102
00:15:02.790 23:17:51 -- common/autotest_common.sh@638 -- # local es=0
00:15:02.790 23:17:51 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 3872102
00:15:02.790 23:17:51 -- common/autotest_common.sh@626 -- # local arg=wait
00:15:02.790 23:17:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:15:02.790 23:17:51 -- common/autotest_common.sh@630 -- # type -t wait
00:15:02.790 23:17:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:15:02.790 23:17:51 -- common/autotest_common.sh@641 -- # wait 3872102
00:15:02.790 23:17:51 -- common/autotest_common.sh@641 -- # es=1
00:15:02.790 23:17:51 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:15:02.790 23:17:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:15:02.790 23:17:51 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:02.790 23:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:02.790 23:17:51 -- common/autotest_common.sh@10 -- # set +x
00:15:02.790 23:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:02.790 23:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:02.790 23:17:51 -- common/autotest_common.sh@10 -- # set +x
00:15:02.790 [2024-04-26 23:17:51.920633] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:02.790 23:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:02.790 23:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:02.790 23:17:51 -- common/autotest_common.sh@10 -- # set +x
00:15:02.790 23:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@54 -- # perf_pid=3872828
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@56 -- # delay=0
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@57 -- # kill -0 3872828
00:15:02.790 23:17:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:02.790 EAL: No free 2048 kB hugepages reported on node 1
00:15:02.790 [2024-04-26 23:17:51.991154] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:15:03.361 23:17:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:03.361 23:17:52 -- target/delete_subsystem.sh@57 -- # kill -0 3872828
00:15:03.361 23:17:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:03.931 23:17:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:03.931 23:17:52 -- target/delete_subsystem.sh@57 -- # kill -0 3872828
00:15:03.931 23:17:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:04.501 23:17:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:04.501 23:17:53 -- target/delete_subsystem.sh@57 -- # kill -0 3872828
00:15:04.501 23:17:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:04.762 23:17:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:04.762 23:17:53 -- target/delete_subsystem.sh@57 -- # kill -0 3872828
00:15:04.762 23:17:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:05.335 23:17:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:05.335 23:17:54 -- target/delete_subsystem.sh@57 -- # kill -0 3872828
00:15:05.335 23:17:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:05.905 23:17:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:05.905 23:17:54 -- target/delete_subsystem.sh@57 -- # kill -0 3872828
00:15:05.905 23:17:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:05.905 Initializing NVMe Controllers
00:15:05.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:05.905 Controller IO queue size 128, less than required.
00:15:05.905 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:05.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:05.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:05.905 Initialization complete. Launching workers.
00:15:05.905 ========================================================
00:15:05.906 Latency(us)
00:15:05.906 Device Information                                                      :    IOPS   MiB/s     Average         min         max
00:15:05.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06  1002326.96  1000242.60  1006128.88
00:15:05.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06  1003964.76  1000279.12  1010521.28
00:15:05.906 ========================================================
00:15:05.906 Total                                                                   :  256.00    0.12  1003145.86  1000242.60  1010521.28
00:15:05.906
00:15:06.476 23:17:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:06.476 23:17:55 -- target/delete_subsystem.sh@57 -- # kill -0 3872828
00:15:06.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3872828) - No such process
00:15:06.476 23:17:55 -- target/delete_subsystem.sh@67 -- # wait 3872828
00:15:06.476 23:17:55 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:15:06.476 23:17:55 -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:15:06.476 23:17:55 -- nvmf/common.sh@477 -- # nvmfcleanup
00:15:06.476 23:17:55 -- nvmf/common.sh@117 -- # sync
00:15:06.476 23:17:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:06.476 23:17:55 -- nvmf/common.sh@120 -- # set +e
00:15:06.476 23:17:55 -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:06.476 23:17:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:06.476 rmmod nvme_tcp
00:15:06.476 rmmod nvme_fabrics
00:15:06.476 rmmod nvme_keyring
00:15:06.476 23:17:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:06.476 23:17:55 -- nvmf/common.sh@124 -- # set -e
00:15:06.476 23:17:55 -- nvmf/common.sh@125 -- # return 0
00:15:06.476 23:17:55 -- nvmf/common.sh@478 -- # '[' -n 3872074 ']'
00:15:06.476 23:17:55 -- nvmf/common.sh@479 -- # killprocess 3872074
00:15:06.476 23:17:55 -- common/autotest_common.sh@936 -- # '[' -z 3872074 ']'
00:15:06.476 23:17:55 -- common/autotest_common.sh@940 -- # kill -0 3872074
00:15:06.476 23:17:55 -- common/autotest_common.sh@941 -- # uname
00:15:06.476 23:17:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:06.476 23:17:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3872074
00:15:06.476 23:17:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:06.476 23:17:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:06.476 23:17:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3872074'
00:15:06.476 killing process with pid 3872074
00:15:06.476 23:17:55 -- common/autotest_common.sh@955 -- # kill 3872074
00:15:06.476 23:17:55 -- common/autotest_common.sh@960 -- # wait 3872074
00:15:06.476 23:17:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:15:06.476 23:17:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:15:06.476 23:17:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:15:06.476 23:17:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:06.476 23:17:55 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:06.476 23:17:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:06.476 23:17:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:06.476 23:17:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:09.019 23:17:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:09.019
00:15:09.019 real 0m17.637s
00:15:09.019 user 0m30.486s
00:15:09.019 sys 0m6.048s
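Both perf runs end the same way: the subsystem disappears, spdk_nvme_perf exits with errors, and the script's bounded poll sees the PID vanish. Reconstructed from the xtrace above (the helper shape is assumed; kill -0 sends no signal, it only tests whether the process still exists):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then      # the first loop in this log uses > 30
        echo "perf pid $perf_pid still alive after timeout" >&2
        exit 1
    fi
    sleep 0.5
done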
00:15:09.019 23:17:57 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:15:09.019 23:17:57 -- common/autotest_common.sh@10 -- # set +x
00:15:09.019 ************************************
00:15:09.019 END TEST nvmf_delete_subsystem
00:15:09.019 ************************************
00:15:09.019 23:17:57 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:15:09.019 23:17:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:15:09.019 23:17:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:09.019 23:17:57 -- common/autotest_common.sh@10 -- # set +x
00:15:09.019 ************************************
00:15:09.019 START TEST nvmf_ns_masking
00:15:09.019 ************************************
00:15:09.019 23:17:57 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:15:09.019 * Looking for test storage...
00:15:09.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:09.019 23:17:58 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:09.019 23:17:58 -- nvmf/common.sh@7 -- # uname -s
00:15:09.019 23:17:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:09.019 23:17:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:09.019 23:17:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:09.019 23:17:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:09.019 23:17:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:09.019 23:17:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:09.019 23:17:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:09.019 23:17:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:09.019 23:17:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:09.019 23:17:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:09.019 23:17:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:09.019 23:17:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:09.019 23:17:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:09.019 23:17:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:09.019 23:17:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:09.019 23:17:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:09.019 23:17:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:09.019 23:17:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:09.019 23:17:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:09.019 23:17:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:09.019 23:17:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[identical toolchain entries repeated several times elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:09.019 23:17:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[repeated toolchain entries and system dirs elided]
00:15:09.019 23:17:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain entries and system dirs elided]
00:15:09.019 23:17:58 -- paths/export.sh@5 -- # export PATH
00:15:09.019 23:17:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain entries and system dirs elided]
00:15:09.019 23:17:58 -- nvmf/common.sh@47 -- # : 0
00:15:09.019 23:17:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:09.019 23:17:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:09.019 23:17:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:09.019 23:17:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:09.019 23:17:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:09.019 23:17:58 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:09.019 23:17:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:09.019 23:17:58 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:09.019 23:17:58 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:15:09.019 23:17:58 -- target/ns_masking.sh@11 -- # loops=5
00:15:09.019 23:17:58 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:15:09.019 23:17:58 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1
00:15:09.019 23:17:58 -- target/ns_masking.sh@15 -- # uuidgen
00:15:09.019 23:17:58 -- target/ns_masking.sh@15 -- # HOSTID=62c977e7-cf56-459c-a2d7-720604cd8e4e
00:15:09.019 23:17:58 -- target/ns_masking.sh@44 -- # nvmftestinit
00:15:09.019 23:17:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:15:09.019 23:17:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:09.019 23:17:58 -- nvmf/common.sh@437 -- # prepare_net_devs
00:15:09.019 23:17:58 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:15:09.019 23:17:58 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:15:09.019 23:17:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:09.019 23:17:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:09.019 23:17:58 -- common/autotest_common.sh@22
-- # _remove_spdk_ns 00:15:09.019 23:17:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:09.019 23:17:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:09.019 23:17:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.019 23:17:58 -- common/autotest_common.sh@10 -- # set +x 00:15:17.177 23:18:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:17.177 23:18:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.177 23:18:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.177 23:18:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.177 23:18:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.177 23:18:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.177 23:18:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.177 23:18:05 -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.177 23:18:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.177 23:18:05 -- nvmf/common.sh@296 -- # e810=() 00:15:17.177 23:18:05 -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.177 23:18:05 -- nvmf/common.sh@297 -- # x722=() 00:15:17.177 23:18:05 -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.177 23:18:05 -- nvmf/common.sh@298 -- # mlx=() 00:15:17.177 23:18:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.177 23:18:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.177 23:18:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.177 23:18:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.177 23:18:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.177 23:18:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.177 23:18:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:17.177 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:17.177 23:18:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.177 23:18:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:17.177 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:17.177 23:18:05 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.177 23:18:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.177 23:18:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.177 23:18:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:17.177 23:18:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.177 23:18:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:17.177 Found net devices under 0000:31:00.0: cvl_0_0 00:15:17.177 23:18:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.177 23:18:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.177 23:18:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.177 23:18:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:17.177 23:18:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.177 23:18:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:17.177 Found net devices under 0000:31:00.1: cvl_0_1 00:15:17.177 23:18:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.177 23:18:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:17.177 23:18:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:17.177 23:18:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:17.177 23:18:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:17.177 23:18:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.177 23:18:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.177 23:18:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.177 23:18:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:17.177 23:18:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.177 23:18:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.177 23:18:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.177 23:18:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.177 23:18:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.177 23:18:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.177 23:18:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.177 23:18:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.177 23:18:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.177 23:18:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.177 23:18:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.177 23:18:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.177 23:18:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.177 23:18:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.177 23:18:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.177 23:18:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:15:17.177 00:15:17.177 --- 10.0.0.2 ping statistics --- 00:15:17.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.177 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:15:17.177 23:18:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:15:17.177 00:15:17.177 --- 10.0.0.1 ping statistics --- 00:15:17.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.177 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:15:17.177 23:18:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.178 23:18:05 -- nvmf/common.sh@411 -- # return 0 00:15:17.178 23:18:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:17.178 23:18:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.178 23:18:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:17.178 23:18:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:17.178 23:18:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.178 23:18:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:17.178 23:18:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:17.178 23:18:05 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:17.178 23:18:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:17.178 23:18:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:17.178 23:18:05 -- common/autotest_common.sh@10 -- # set +x 00:15:17.178 23:18:05 -- nvmf/common.sh@470 -- # nvmfpid=3877950 00:15:17.178 23:18:05 -- nvmf/common.sh@471 -- # waitforlisten 3877950 00:15:17.178 23:18:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.178 23:18:05 -- common/autotest_common.sh@817 -- # '[' -z 3877950 ']' 00:15:17.178 23:18:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.178 23:18:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.178 23:18:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.178 23:18:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.178 23:18:05 -- common/autotest_common.sh@10 -- # set +x 00:15:17.178 [2024-04-26 23:18:05.426893] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:17.178 [2024-04-26 23:18:05.426948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.178 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.178 [2024-04-26 23:18:05.496419] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.178 [2024-04-26 23:18:05.532558] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
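Condensed from the nvmf_tcp_init trace above, this is the topology the harness just built: the two ice ports become a point-to-point link, with the target-side port hidden in a network namespace so the initiator has to cross a real TCP path (interface and namespace names exactly as they appear in this log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target (0.474 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator (0.228 ms above)

The nvmf_tgt process is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ... in the trace above), which is why every listener in this test binds to 10.0.0.2.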
00:15:17.178 [2024-04-26 23:18:05.532600] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.178 [2024-04-26 23:18:05.532610] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.178 [2024-04-26 23:18:05.532618] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.178 [2024-04-26 23:18:05.532624] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.178 [2024-04-26 23:18:05.532777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.178 [2024-04-26 23:18:05.532882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.178 [2024-04-26 23:18:05.532994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.178 [2024-04-26 23:18:05.532995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.178 23:18:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:17.178 23:18:06 -- common/autotest_common.sh@850 -- # return 0 00:15:17.178 23:18:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:17.178 23:18:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:17.178 23:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:17.178 23:18:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.178 23:18:06 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:17.178 [2024-04-26 23:18:06.380026] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.178 23:18:06 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:17.178 23:18:06 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:17.178 23:18:06 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:17.438 Malloc1 00:15:17.438 23:18:06 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:17.700 Malloc2 00:15:17.700 23:18:06 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:17.700 23:18:06 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:17.961 23:18:07 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.222 [2024-04-26 23:18:07.232328] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.222 23:18:07 -- target/ns_masking.sh@61 -- # connect 00:15:18.222 23:18:07 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 62c977e7-cf56-459c-a2d7-720604cd8e4e -a 10.0.0.2 -s 4420 -i 4 00:15:18.222 23:18:07 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:18.222 23:18:07 -- common/autotest_common.sh@1184 -- # local i=0 00:15:18.222 23:18:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.222 23:18:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
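The connect helper just traced wraps nvme connect with the host identity generated earlier (-q is the host NQN, -I the host ID, -i 4 the number of I/O queues) and then calls waitforserial, whose polling continues in the xtrace below. A minimal sketch of that helper under an assumed structure (the individual commands match the trace):

waitforserial() {
    local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        sleep 2
        # count block devices whose serial matches what the target advertises
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}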
00:15:18.222 23:18:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:20.768 23:18:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:20.768 23:18:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:20.768 23:18:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.768 23:18:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:20.768 23:18:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.768 23:18:09 -- common/autotest_common.sh@1194 -- # return 0 00:15:20.768 23:18:09 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:20.768 23:18:09 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:20.768 23:18:09 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:20.768 23:18:09 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:20.768 23:18:09 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:20.768 23:18:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:20.768 23:18:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:20.768 [ 0]:0x1 00:15:20.768 23:18:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:20.768 23:18:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:20.768 23:18:09 -- target/ns_masking.sh@40 -- # nguid=682b8583d641407fb74917760e94f1d9 00:15:20.768 23:18:09 -- target/ns_masking.sh@41 -- # [[ 682b8583d641407fb74917760e94f1d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.768 23:18:09 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:20.768 23:18:09 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:20.768 23:18:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:20.768 23:18:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:20.768 [ 0]:0x1 00:15:20.768 23:18:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:20.769 23:18:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:20.769 23:18:09 -- target/ns_masking.sh@40 -- # nguid=682b8583d641407fb74917760e94f1d9 00:15:20.769 23:18:09 -- target/ns_masking.sh@41 -- # [[ 682b8583d641407fb74917760e94f1d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.769 23:18:09 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:20.769 23:18:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:20.769 23:18:09 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:20.769 [ 1]:0x2 00:15:20.769 23:18:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:20.769 23:18:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:20.769 23:18:09 -- target/ns_masking.sh@40 -- # nguid=9f22da28c6744e3bbc4b06d0ebb0034f 00:15:20.769 23:18:09 -- target/ns_masking.sh@41 -- # [[ 9f22da28c6744e3bbc4b06d0ebb0034f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.769 23:18:09 -- target/ns_masking.sh@69 -- # disconnect 00:15:20.769 23:18:09 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.769 23:18:09 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.055 23:18:10 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:21.353 23:18:10 -- target/ns_masking.sh@77 -- # connect 1 00:15:21.353 23:18:10 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 62c977e7-cf56-459c-a2d7-720604cd8e4e -a 10.0.0.2 -s 4420 -i 4 00:15:21.353 23:18:10 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:21.353 23:18:10 -- common/autotest_common.sh@1184 -- # local i=0 00:15:21.353 23:18:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:21.353 23:18:10 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:15:21.353 23:18:10 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:15:21.353 23:18:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:23.273 23:18:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:23.273 23:18:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:23.273 23:18:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:23.273 23:18:12 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:23.273 23:18:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:23.273 23:18:12 -- common/autotest_common.sh@1194 -- # return 0 00:15:23.273 23:18:12 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:23.273 23:18:12 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:23.273 23:18:12 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:23.273 23:18:12 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:23.273 23:18:12 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:23.273 23:18:12 -- common/autotest_common.sh@638 -- # local es=0 00:15:23.273 23:18:12 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:23.273 23:18:12 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:23.273 23:18:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.273 23:18:12 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:23.273 23:18:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.273 23:18:12 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:23.273 23:18:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:23.273 23:18:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:23.533 23:18:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:23.533 23:18:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.533 23:18:12 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:23.533 23:18:12 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.533 23:18:12 -- common/autotest_common.sh@641 -- # es=1 00:15:23.533 23:18:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:23.533 23:18:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:23.533 23:18:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:23.533 23:18:12 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:23.533 23:18:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:23.533 23:18:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:23.533 [ 0]:0x2 00:15:23.533 23:18:12 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:15:23.533 23:18:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.533 23:18:12 -- target/ns_masking.sh@40 -- # nguid=9f22da28c6744e3bbc4b06d0ebb0034f 00:15:23.533 23:18:12 -- target/ns_masking.sh@41 -- # [[ 9f22da28c6744e3bbc4b06d0ebb0034f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.533 23:18:12 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:23.793 23:18:12 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:23.793 23:18:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:23.793 23:18:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:23.793 [ 0]:0x1 00:15:23.793 23:18:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:23.793 23:18:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.793 23:18:12 -- target/ns_masking.sh@40 -- # nguid=682b8583d641407fb74917760e94f1d9 00:15:23.793 23:18:12 -- target/ns_masking.sh@41 -- # [[ 682b8583d641407fb74917760e94f1d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.793 23:18:12 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:23.793 23:18:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:23.793 23:18:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:23.793 [ 1]:0x2 00:15:23.793 23:18:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:23.793 23:18:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:23.793 23:18:12 -- target/ns_masking.sh@40 -- # nguid=9f22da28c6744e3bbc4b06d0ebb0034f 00:15:23.793 23:18:12 -- target/ns_masking.sh@41 -- # [[ 9f22da28c6744e3bbc4b06d0ebb0034f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:23.793 23:18:12 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:24.053 23:18:13 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:24.053 23:18:13 -- common/autotest_common.sh@638 -- # local es=0 00:15:24.053 23:18:13 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:24.053 23:18:13 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:24.053 23:18:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:24.053 23:18:13 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:24.053 23:18:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:24.053 23:18:13 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:24.053 23:18:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:24.053 23:18:13 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:24.053 23:18:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:24.053 23:18:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:24.053 23:18:13 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:24.053 23:18:13 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.053 23:18:13 -- common/autotest_common.sh@641 -- # es=1 00:15:24.053 23:18:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:24.053 23:18:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:24.053 23:18:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:24.053 23:18:13 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:24.053 23:18:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:24.053 23:18:13 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:24.053 [ 0]:0x2 00:15:24.053 23:18:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:24.053 23:18:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:24.053 23:18:13 -- target/ns_masking.sh@40 -- # nguid=9f22da28c6744e3bbc4b06d0ebb0034f 00:15:24.053 23:18:13 -- target/ns_masking.sh@41 -- # [[ 9f22da28c6744e3bbc4b06d0ebb0034f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.053 23:18:13 -- target/ns_masking.sh@91 -- # disconnect 00:15:24.053 23:18:13 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.313 23:18:13 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:24.313 23:18:13 -- target/ns_masking.sh@95 -- # connect 2 00:15:24.313 23:18:13 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 62c977e7-cf56-459c-a2d7-720604cd8e4e -a 10.0.0.2 -s 4420 -i 4 00:15:24.573 23:18:13 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:24.573 23:18:13 -- common/autotest_common.sh@1184 -- # local i=0 00:15:24.573 23:18:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.573 23:18:13 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:24.573 23:18:13 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:24.573 23:18:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:26.486 23:18:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:26.486 23:18:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:26.486 23:18:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.486 23:18:15 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:15:26.487 23:18:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.487 23:18:15 -- common/autotest_common.sh@1194 -- # return 0 00:15:26.487 23:18:15 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:26.487 23:18:15 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:26.747 23:18:15 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:26.747 23:18:15 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:26.747 23:18:15 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:26.747 23:18:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.747 23:18:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:26.747 [ 0]:0x1 00:15:26.747 23:18:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:26.747 23:18:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:26.747 23:18:15 -- target/ns_masking.sh@40 -- # nguid=682b8583d641407fb74917760e94f1d9 00:15:26.747 23:18:15 -- target/ns_masking.sh@41 -- # [[ 682b8583d641407fb74917760e94f1d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.747 23:18:15 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:26.747 23:18:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.747 23:18:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:27.008 [ 1]:0x2 
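Every "[ N]:0xM" line in this output comes from the same probe, used both positively and under NOT: a namespace counts as visible only if it appears in the controller's active-namespace list and reports a real NGUID, and the NGUID check for this namespace continues just below. A reconstructed sketch of the helper at ns_masking.sh@39-41 (exact body assumed):

ns_is_visible() {
    # list the active namespaces; masked namespaces drop out of list-ns
    nvme list-ns /dev/nvme0 | grep "$1"
    # a masked namespace reads back with an all-zero NGUID
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}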
00:15:27.008 23:18:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:27.008 23:18:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:27.008 23:18:16 -- target/ns_masking.sh@40 -- # nguid=9f22da28c6744e3bbc4b06d0ebb0034f 00:15:27.008 23:18:16 -- target/ns_masking.sh@41 -- # [[ 9f22da28c6744e3bbc4b06d0ebb0034f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.008 23:18:16 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:27.008 23:18:16 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:27.008 23:18:16 -- common/autotest_common.sh@638 -- # local es=0 00:15:27.008 23:18:16 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:27.008 23:18:16 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:27.008 23:18:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:27.008 23:18:16 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:27.008 23:18:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:27.008 23:18:16 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:27.008 23:18:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:27.008 23:18:16 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:27.008 23:18:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:27.008 23:18:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:27.270 23:18:16 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:27.270 23:18:16 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.270 23:18:16 -- common/autotest_common.sh@641 -- # es=1 00:15:27.270 23:18:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:27.270 23:18:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:27.270 23:18:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:27.270 23:18:16 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:27.270 23:18:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:27.270 23:18:16 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:27.270 [ 0]:0x2 00:15:27.270 23:18:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:27.270 23:18:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:27.270 23:18:16 -- target/ns_masking.sh@40 -- # nguid=9f22da28c6744e3bbc4b06d0ebb0034f 00:15:27.270 23:18:16 -- target/ns_masking.sh@41 -- # [[ 9f22da28c6744e3bbc4b06d0ebb0034f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.270 23:18:16 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:27.270 23:18:16 -- common/autotest_common.sh@638 -- # local es=0 00:15:27.270 23:18:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:27.270 23:18:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.270 23:18:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:27.270 23:18:16 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.270 23:18:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:27.270 23:18:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.270 23:18:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:27.270 23:18:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.270 23:18:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:27.270 23:18:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:27.270 [2024-04-26 23:18:16.489463] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:27.270 request: 00:15:27.270 { 00:15:27.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.270 "nsid": 2, 00:15:27.270 "host": "nqn.2016-06.io.spdk:host1", 00:15:27.270 "method": "nvmf_ns_remove_host", 00:15:27.270 "req_id": 1 00:15:27.270 } 00:15:27.270 Got JSON-RPC error response 00:15:27.270 response: 00:15:27.270 { 00:15:27.270 "code": -32602, 00:15:27.270 "message": "Invalid parameters" 00:15:27.270 } 00:15:27.270 23:18:16 -- common/autotest_common.sh@641 -- # es=1 00:15:27.270 23:18:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:27.270 23:18:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:27.270 23:18:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:27.270 23:18:16 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:27.270 23:18:16 -- common/autotest_common.sh@638 -- # local es=0 00:15:27.270 23:18:16 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:27.270 23:18:16 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:27.531 23:18:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:27.532 23:18:16 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:27.532 23:18:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:27.532 23:18:16 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:27.532 23:18:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:27.532 23:18:16 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:27.532 23:18:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:27.532 23:18:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:27.532 23:18:16 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:27.532 23:18:16 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.532 23:18:16 -- common/autotest_common.sh@641 -- # es=1 00:15:27.532 23:18:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:27.532 23:18:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:27.532 23:18:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:27.532 23:18:16 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:27.532 23:18:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:27.532 23:18:16 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:27.532 [ 0]:0x2 00:15:27.532 23:18:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:27.532 23:18:16 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:15:27.532 23:18:16 -- target/ns_masking.sh@40 -- # nguid=9f22da28c6744e3bbc4b06d0ebb0034f 00:15:27.532 23:18:16 -- target/ns_masking.sh@41 -- # [[ 9f22da28c6744e3bbc4b06d0ebb0034f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.532 23:18:16 -- target/ns_masking.sh@108 -- # disconnect 00:15:27.532 23:18:16 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.532 23:18:16 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.793 23:18:16 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:27.793 23:18:16 -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:27.793 23:18:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:27.793 23:18:16 -- nvmf/common.sh@117 -- # sync 00:15:27.793 23:18:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.793 23:18:16 -- nvmf/common.sh@120 -- # set +e 00:15:27.793 23:18:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.793 23:18:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.793 rmmod nvme_tcp 00:15:27.793 rmmod nvme_fabrics 00:15:27.793 rmmod nvme_keyring 00:15:27.793 23:18:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.793 23:18:16 -- nvmf/common.sh@124 -- # set -e 00:15:27.793 23:18:16 -- nvmf/common.sh@125 -- # return 0 00:15:27.793 23:18:16 -- nvmf/common.sh@478 -- # '[' -n 3877950 ']' 00:15:27.793 23:18:16 -- nvmf/common.sh@479 -- # killprocess 3877950 00:15:27.793 23:18:16 -- common/autotest_common.sh@936 -- # '[' -z 3877950 ']' 00:15:27.793 23:18:16 -- common/autotest_common.sh@940 -- # kill -0 3877950 00:15:27.793 23:18:16 -- common/autotest_common.sh@941 -- # uname 00:15:27.793 23:18:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.793 23:18:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3877950 00:15:27.793 23:18:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:27.793 23:18:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:27.793 23:18:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3877950' 00:15:27.793 killing process with pid 3877950 00:15:27.793 23:18:16 -- common/autotest_common.sh@955 -- # kill 3877950 00:15:27.793 23:18:16 -- common/autotest_common.sh@960 -- # wait 3877950 00:15:28.053 23:18:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:28.053 23:18:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:28.053 23:18:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:28.053 23:18:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.053 23:18:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.053 23:18:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.053 23:18:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.053 23:18:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.966 23:18:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.966 00:15:29.966 real 0m21.200s 00:15:29.966 user 0m50.842s 00:15:29.966 sys 0m6.906s 00:15:29.966 23:18:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:29.966 23:18:19 -- common/autotest_common.sh@10 -- # set +x 00:15:29.966 ************************************ 00:15:29.966 END TEST nvmf_ns_masking 00:15:29.966 
************************************ 00:15:30.226 23:18:19 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:30.226 23:18:19 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:30.226 23:18:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:30.226 23:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:30.226 23:18:19 -- common/autotest_common.sh@10 -- # set +x 00:15:30.226 ************************************ 00:15:30.227 START TEST nvmf_nvme_cli 00:15:30.227 ************************************ 00:15:30.227 23:18:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:30.227 * Looking for test storage... 00:15:30.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.227 23:18:19 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.227 23:18:19 -- nvmf/common.sh@7 -- # uname -s 00:15:30.227 23:18:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.227 23:18:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.227 23:18:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.227 23:18:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.227 23:18:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.227 23:18:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.227 23:18:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.227 23:18:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.227 23:18:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.487 23:18:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.487 23:18:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:30.487 23:18:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:30.487 23:18:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.487 23:18:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.487 23:18:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.487 23:18:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.487 23:18:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.487 23:18:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.487 23:18:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.487 23:18:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.487 23:18:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.488 23:18:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.488 23:18:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.488 23:18:19 -- paths/export.sh@5 -- # export PATH 00:15:30.488 23:18:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.488 23:18:19 -- nvmf/common.sh@47 -- # : 0 00:15:30.488 23:18:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.488 23:18:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.488 23:18:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.488 23:18:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.488 23:18:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.488 23:18:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.488 23:18:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.488 23:18:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.488 23:18:19 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:30.488 23:18:19 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:30.488 23:18:19 -- target/nvme_cli.sh@14 -- # devs=() 00:15:30.488 23:18:19 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:30.488 23:18:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:30.488 23:18:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.488 23:18:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:30.488 23:18:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:30.488 23:18:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:30.488 23:18:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.488 23:18:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.488 23:18:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.488 23:18:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:30.488 23:18:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:30.488 23:18:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:30.488 23:18:19 -- common/autotest_common.sh@10 -- # set +x 00:15:38.636 23:18:26 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:38.636 23:18:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.636 23:18:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.636 23:18:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.636 23:18:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.636 23:18:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.636 23:18:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.636 23:18:26 -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.636 23:18:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.636 23:18:26 -- nvmf/common.sh@296 -- # e810=() 00:15:38.636 23:18:26 -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.636 23:18:26 -- nvmf/common.sh@297 -- # x722=() 00:15:38.636 23:18:26 -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.636 23:18:26 -- nvmf/common.sh@298 -- # mlx=() 00:15:38.637 23:18:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.637 23:18:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.637 23:18:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.637 23:18:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.637 23:18:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.637 23:18:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.637 23:18:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:38.637 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:38.637 23:18:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.637 23:18:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:38.637 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:38.637 23:18:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
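The records above are common.sh classifying candidate NICs by PCI vendor/device ID; on this rig both test ports are Intel E810 functions (0x8086:0x159b, ice driver), and the records that follow resolve each function to its kernel netdev through sysfs. The same lookup can be reproduced by hand (a sketch assuming pciutils is installed; the addresses are the ones this run reports):

    # list the E810 functions the harness picked up
    lspci -D -d 8086:159b
    # resolve the netdev behind each function, mirroring the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion below
    ls /sys/bus/pci/devices/0000:31:00.0/net    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:31:00.1/net    # -> cvl_0_1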
00:15:38.637 23:18:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.637 23:18:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.637 23:18:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.637 23:18:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:38.637 23:18:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.637 23:18:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:38.637 Found net devices under 0000:31:00.0: cvl_0_0 00:15:38.637 23:18:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.637 23:18:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.637 23:18:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.637 23:18:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:38.637 23:18:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.637 23:18:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:38.637 Found net devices under 0000:31:00.1: cvl_0_1 00:15:38.637 23:18:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.637 23:18:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:38.637 23:18:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:38.637 23:18:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:38.637 23:18:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.637 23:18:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.637 23:18:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.637 23:18:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.637 23:18:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.637 23:18:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.637 23:18:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.637 23:18:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.637 23:18:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.637 23:18:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.637 23:18:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.637 23:18:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.637 23:18:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.637 23:18:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.637 23:18:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.637 23:18:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.637 23:18:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.637 23:18:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.637 23:18:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.637 23:18:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:38.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:15:38.637 00:15:38.637 --- 10.0.0.2 ping statistics --- 00:15:38.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.637 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:15:38.637 23:18:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:15:38.637 00:15:38.637 --- 10.0.0.1 ping statistics --- 00:15:38.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.637 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:15:38.637 23:18:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.637 23:18:26 -- nvmf/common.sh@411 -- # return 0 00:15:38.637 23:18:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:38.637 23:18:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.637 23:18:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:38.637 23:18:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.637 23:18:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:38.637 23:18:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:38.637 23:18:26 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:38.637 23:18:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:38.637 23:18:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:38.637 23:18:26 -- common/autotest_common.sh@10 -- # set +x 00:15:38.637 23:18:26 -- nvmf/common.sh@470 -- # nvmfpid=3885008 00:15:38.637 23:18:26 -- nvmf/common.sh@471 -- # waitforlisten 3885008 00:15:38.637 23:18:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:38.637 23:18:26 -- common/autotest_common.sh@817 -- # '[' -z 3885008 ']' 00:15:38.637 23:18:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.637 23:18:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:38.637 23:18:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.637 23:18:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:38.637 23:18:26 -- common/autotest_common.sh@10 -- # set +x 00:15:38.637 [2024-04-26 23:18:26.790888] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:38.637 [2024-04-26 23:18:26.790939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.637 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.637 [2024-04-26 23:18:26.857685] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.637 [2024-04-26 23:18:26.889404] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.637 [2024-04-26 23:18:26.889448] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:38.637 [2024-04-26 23:18:26.889456] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.637 [2024-04-26 23:18:26.889464] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.637 [2024-04-26 23:18:26.889472] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.637 [2024-04-26 23:18:26.889620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.637 [2024-04-26 23:18:26.889752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.637 [2024-04-26 23:18:26.889902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.637 [2024-04-26 23:18:26.889903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.637 23:18:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:38.637 23:18:27 -- common/autotest_common.sh@850 -- # return 0 00:15:38.637 23:18:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:38.637 23:18:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:38.637 23:18:27 -- common/autotest_common.sh@10 -- # set +x 00:15:38.637 23:18:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.637 23:18:27 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.637 23:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.637 23:18:27 -- common/autotest_common.sh@10 -- # set +x 00:15:38.637 [2024-04-26 23:18:27.611470] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.637 23:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.637 23:18:27 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:38.637 23:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.637 23:18:27 -- common/autotest_common.sh@10 -- # set +x 00:15:38.637 Malloc0 00:15:38.637 23:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.637 23:18:27 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:38.637 23:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.637 23:18:27 -- common/autotest_common.sh@10 -- # set +x 00:15:38.637 Malloc1 00:15:38.637 23:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.637 23:18:27 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:38.637 23:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.637 23:18:27 -- common/autotest_common.sh@10 -- # set +x 00:15:38.638 23:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.638 23:18:27 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:38.638 23:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.638 23:18:27 -- common/autotest_common.sh@10 -- # set +x 00:15:38.638 23:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.638 23:18:27 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:38.638 23:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.638 23:18:27 -- common/autotest_common.sh@10 -- # set +x 00:15:38.638 23:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.638 23:18:27 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:15:38.638 23:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.638 23:18:27 -- common/autotest_common.sh@10 -- # set +x 00:15:38.638 [2024-04-26 23:18:27.701278] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.638 23:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.638 23:18:27 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:38.638 23:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.638 23:18:27 -- common/autotest_common.sh@10 -- # set +x 00:15:38.638 23:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.638 23:18:27 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:38.638 00:15:38.638 Discovery Log Number of Records 2, Generation counter 2 00:15:38.638 =====Discovery Log Entry 0====== 00:15:38.638 trtype: tcp 00:15:38.638 adrfam: ipv4 00:15:38.638 subtype: current discovery subsystem 00:15:38.638 treq: not required 00:15:38.638 portid: 0 00:15:38.638 trsvcid: 4420 00:15:38.638 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:38.638 traddr: 10.0.0.2 00:15:38.638 eflags: explicit discovery connections, duplicate discovery information 00:15:38.638 sectype: none 00:15:38.638 =====Discovery Log Entry 1====== 00:15:38.638 trtype: tcp 00:15:38.638 adrfam: ipv4 00:15:38.638 subtype: nvme subsystem 00:15:38.638 treq: not required 00:15:38.638 portid: 0 00:15:38.638 trsvcid: 4420 00:15:38.638 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:38.638 traddr: 10.0.0.2 00:15:38.638 eflags: none 00:15:38.638 sectype: none 00:15:38.638 23:18:27 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:38.638 23:18:27 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:38.638 23:18:27 -- nvmf/common.sh@511 -- # local dev _ 00:15:38.638 23:18:27 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:38.638 23:18:27 -- nvmf/common.sh@510 -- # nvme list 00:15:38.638 23:18:27 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:38.638 23:18:27 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:38.638 23:18:27 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:38.638 23:18:27 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:38.638 23:18:27 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:38.638 23:18:27 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.550 23:18:29 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:40.550 23:18:29 -- common/autotest_common.sh@1184 -- # local i=0 00:15:40.550 23:18:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.550 23:18:29 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:40.550 23:18:29 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:40.550 23:18:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:42.463 23:18:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:42.463 23:18:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:42.463 23:18:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:42.463 23:18:31 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
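The waitforserial records around this point are how the harness decides the fabric connect actually surfaced block devices: it polls lsblk until the count of devices carrying the subsystem serial reaches the expected number (two namespaces here, hence the earlier nvme_device_counter=2). A condensed sketch of the loop as traced, with the retry bookkeeping simplified:

    # sketch of common.sh waitforserial, reconstructed from the trace;
    # invoked here as: waitforserial SPDKISFASTANDAWESOME 2
    waitforserial() {
        local serial=$1 want=${2:-1} i=0 have
        sleep 2                                # let udev create the nodes
        while (( i++ <= 15 )); do
            have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( have == want )) && return 0
            sleep 2
        done
        return 1
    }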
00:15:42.463 23:18:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:42.463 23:18:31 -- common/autotest_common.sh@1194 -- # return 0 00:15:42.463 23:18:31 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:42.463 23:18:31 -- nvmf/common.sh@511 -- # local dev _ 00:15:42.463 23:18:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:42.463 23:18:31 -- nvmf/common.sh@510 -- # nvme list 00:15:42.463 23:18:31 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:42.463 23:18:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:42.463 23:18:31 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:42.463 23:18:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:42.463 23:18:31 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:42.463 23:18:31 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:42.463 23:18:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:42.463 23:18:31 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:42.463 23:18:31 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:42.463 23:18:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:42.463 23:18:31 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:42.463 /dev/nvme0n1 ]] 00:15:42.463 23:18:31 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:42.463 23:18:31 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:42.463 23:18:31 -- nvmf/common.sh@511 -- # local dev _ 00:15:42.463 23:18:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:42.463 23:18:31 -- nvmf/common.sh@510 -- # nvme list 00:15:42.463 23:18:31 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:42.463 23:18:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:42.463 23:18:31 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:42.463 23:18:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:42.463 23:18:31 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:42.463 23:18:31 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:42.463 23:18:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:42.463 23:18:31 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:42.463 23:18:31 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:42.463 23:18:31 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:42.463 23:18:31 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:42.463 23:18:31 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:42.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.463 23:18:31 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:42.463 23:18:31 -- common/autotest_common.sh@1205 -- # local i=0 00:15:42.463 23:18:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:42.463 23:18:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.463 23:18:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:42.463 23:18:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.463 23:18:31 -- common/autotest_common.sh@1217 -- # return 0 00:15:42.463 23:18:31 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:42.463 23:18:31 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.463 23:18:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.463 23:18:31 -- common/autotest_common.sh@10 -- # set +x 00:15:42.463 23:18:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.463 23:18:31 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:42.463 23:18:31 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:42.463 23:18:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:42.463 23:18:31 -- nvmf/common.sh@117 -- # sync 00:15:42.463 23:18:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:42.463 23:18:31 -- nvmf/common.sh@120 -- # set +e 00:15:42.463 23:18:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:42.463 23:18:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:42.463 rmmod nvme_tcp 00:15:42.463 rmmod nvme_fabrics 00:15:42.463 rmmod nvme_keyring 00:15:42.463 23:18:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:42.463 23:18:31 -- nvmf/common.sh@124 -- # set -e 00:15:42.463 23:18:31 -- nvmf/common.sh@125 -- # return 0 00:15:42.463 23:18:31 -- nvmf/common.sh@478 -- # '[' -n 3885008 ']' 00:15:42.463 23:18:31 -- nvmf/common.sh@479 -- # killprocess 3885008 00:15:42.463 23:18:31 -- common/autotest_common.sh@936 -- # '[' -z 3885008 ']' 00:15:42.463 23:18:31 -- common/autotest_common.sh@940 -- # kill -0 3885008 00:15:42.463 23:18:31 -- common/autotest_common.sh@941 -- # uname 00:15:42.463 23:18:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:42.463 23:18:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3885008 00:15:42.725 23:18:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:42.725 23:18:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:42.725 23:18:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3885008' 00:15:42.725 killing process with pid 3885008 00:15:42.725 23:18:31 -- common/autotest_common.sh@955 -- # kill 3885008 00:15:42.725 23:18:31 -- common/autotest_common.sh@960 -- # wait 3885008 00:15:42.725 23:18:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:42.725 23:18:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:42.725 23:18:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:42.725 23:18:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.725 23:18:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:42.725 23:18:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.725 23:18:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.725 23:18:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.270 23:18:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.270 00:15:45.270 real 0m14.582s 00:15:45.270 user 0m22.068s 00:15:45.270 sys 0m5.821s 00:15:45.270 23:18:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:45.270 23:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:45.270 ************************************ 00:15:45.270 END TEST nvmf_nvme_cli 00:15:45.270 ************************************ 00:15:45.270 23:18:33 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:45.270 23:18:33 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:45.270 23:18:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:45.270 23:18:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:45.270 23:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:45.270 ************************************ 00:15:45.270 START TEST nvmf_vfio_user 00:15:45.270 ************************************ 00:15:45.270 23:18:34 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:45.270 * Looking for test storage... 00:15:45.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.270 23:18:34 -- nvmf/common.sh@7 -- # uname -s 00:15:45.270 23:18:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.270 23:18:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.270 23:18:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.270 23:18:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.270 23:18:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.270 23:18:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.270 23:18:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.270 23:18:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.270 23:18:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.270 23:18:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.270 23:18:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:45.270 23:18:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:45.270 23:18:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.270 23:18:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.270 23:18:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.270 23:18:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.270 23:18:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.270 23:18:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.270 23:18:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.270 23:18:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.270 23:18:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.270 23:18:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.270 23:18:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.270 23:18:34 -- paths/export.sh@5 -- # export PATH 00:15:45.270 23:18:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.270 23:18:34 -- nvmf/common.sh@47 -- # : 0 00:15:45.270 23:18:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.270 23:18:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.270 23:18:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.270 23:18:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.270 23:18:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.270 23:18:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.270 23:18:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.270 23:18:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:45.270 23:18:34 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3886772 00:15:45.271 23:18:34 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3886772' 00:15:45.271 Process pid: 3886772 00:15:45.271 23:18:34 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:45.271 23:18:34 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3886772 00:15:45.271 23:18:34 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:45.271 23:18:34 -- common/autotest_common.sh@817 -- # '[' -z 3886772 ']' 00:15:45.271 23:18:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.271 23:18:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:45.271 23:18:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.271 23:18:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:45.271 23:18:34 -- common/autotest_common.sh@10 -- # set +x 00:15:45.271 [2024-04-26 23:18:34.353733] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:45.271 [2024-04-26 23:18:34.353803] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.271 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.271 [2024-04-26 23:18:34.420176] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.271 [2024-04-26 23:18:34.457893] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.271 [2024-04-26 23:18:34.457943] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.271 [2024-04-26 23:18:34.457952] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.271 [2024-04-26 23:18:34.457959] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.271 [2024-04-26 23:18:34.457965] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:45.271 [2024-04-26 23:18:34.458081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.271 [2024-04-26 23:18:34.458207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.271 [2024-04-26 23:18:34.458372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.271 [2024-04-26 23:18:34.458373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.211 23:18:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:46.211 23:18:35 -- common/autotest_common.sh@850 -- # return 0 00:15:46.211 23:18:35 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:47.152 23:18:36 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:47.152 23:18:36 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:47.152 23:18:36 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:47.152 23:18:36 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:47.152 23:18:36 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:47.152 23:18:36 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:47.412 Malloc1 00:15:47.412 23:18:36 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:47.672 23:18:36 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:47.672 23:18:36 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:47.947 23:18:36 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:47.948 23:18:36 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:47.948 23:18:36 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:47.948 Malloc2 00:15:47.948 23:18:37 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:48.212 23:18:37 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:48.473 23:18:37 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:48.473 23:18:37 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:48.473 23:18:37 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:48.473 23:18:37 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:48.473 23:18:37 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:48.473 23:18:37 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:48.473 23:18:37 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:48.473 [2024-04-26 23:18:37.696492] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:48.473 [2024-04-26 23:18:37.696536] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887432 ] 00:15:48.473 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.735 [2024-04-26 23:18:37.729428] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:48.735 [2024-04-26 23:18:37.738444] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:48.735 [2024-04-26 23:18:37.738464] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7eff5207a000 00:15:48.735 [2024-04-26 23:18:37.739442] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:48.735 [2024-04-26 23:18:37.740446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:48.735 [2024-04-26 23:18:37.741450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:48.735 [2024-04-26 23:18:37.742456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:48.735 [2024-04-26 23:18:37.743457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:48.735 [2024-04-26 23:18:37.744462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:48.735 [2024-04-26 23:18:37.745469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:48.736 [2024-04-26 23:18:37.746470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:48.736 [2024-04-26 23:18:37.747477] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:48.736 [2024-04-26 23:18:37.747487] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7eff50e3e000 00:15:48.736 [2024-04-26 23:18:37.748813] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:48.736 [2024-04-26 23:18:37.768997] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:48.736 [2024-04-26 23:18:37.769020] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:48.736 [2024-04-26 23:18:37.771622] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:48.736 [2024-04-26 23:18:37.771664] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:48.736 [2024-04-26 23:18:37.771745] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:48.736 [2024-04-26 23:18:37.771763] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:48.736 [2024-04-26 23:18:37.771769] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:48.736 [2024-04-26 23:18:37.772617] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:48.736 [2024-04-26 23:18:37.772626] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:48.736 [2024-04-26 23:18:37.772633] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:48.736 [2024-04-26 23:18:37.773628] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:48.736 [2024-04-26 23:18:37.773636] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:48.736 [2024-04-26 23:18:37.773644] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:48.736 [2024-04-26 23:18:37.774633] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:48.736 [2024-04-26 23:18:37.774641] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:48.736 [2024-04-26 23:18:37.775637] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:48.736 [2024-04-26 23:18:37.775645] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:48.736 [2024-04-26 23:18:37.775650] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:48.736 [2024-04-26 23:18:37.775656] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:48.736 [2024-04-26 23:18:37.775762] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:48.736 [2024-04-26 23:18:37.775766] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:48.736 [2024-04-26 23:18:37.775771] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:48.736 [2024-04-26 23:18:37.776640] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:48.736 [2024-04-26 23:18:37.777643] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:48.736 [2024-04-26 23:18:37.778653] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:48.736 [2024-04-26 23:18:37.779646] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:48.736 [2024-04-26 23:18:37.779714] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:48.736 [2024-04-26 23:18:37.780661] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:48.736 [2024-04-26 23:18:37.780669] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:48.736 [2024-04-26 23:18:37.780674] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.780695] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:48.736 [2024-04-26 23:18:37.780706] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.780721] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:48.736 [2024-04-26 23:18:37.780725] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:48.736 [2024-04-26 23:18:37.780739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:48.736 [2024-04-26 
23:18:37.780783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:48.736 [2024-04-26 23:18:37.780792] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:48.736 [2024-04-26 23:18:37.780797] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:48.736 [2024-04-26 23:18:37.780801] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:48.736 [2024-04-26 23:18:37.780806] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:48.736 [2024-04-26 23:18:37.780811] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:48.736 [2024-04-26 23:18:37.780815] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:48.736 [2024-04-26 23:18:37.780820] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.780827] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.780841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:48.736 [2024-04-26 23:18:37.780855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:48.736 [2024-04-26 23:18:37.780867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.736 [2024-04-26 23:18:37.780876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.736 [2024-04-26 23:18:37.780884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.736 [2024-04-26 23:18:37.780892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.736 [2024-04-26 23:18:37.780897] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.780906] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.780917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:48.736 [2024-04-26 23:18:37.780929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:48.736 [2024-04-26 23:18:37.780934] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:48.736 [2024-04-26 23:18:37.780939] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.780947] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.780953] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.780962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:48.736 [2024-04-26 23:18:37.780973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:48.736 [2024-04-26 23:18:37.781020] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.781027] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.781035] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:48.736 [2024-04-26 23:18:37.781039] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:48.736 [2024-04-26 23:18:37.781045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:48.736 [2024-04-26 23:18:37.781060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:48.736 [2024-04-26 23:18:37.781068] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:48.736 [2024-04-26 23:18:37.781078] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.781086] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.781092] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:48.736 [2024-04-26 23:18:37.781096] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:48.736 [2024-04-26 23:18:37.781102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:48.736 [2024-04-26 23:18:37.781123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:48.736 [2024-04-26 23:18:37.781135] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:48.736 [2024-04-26 23:18:37.781142] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:48.737 [2024-04-26 23:18:37.781148] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:15:48.737 [2024-04-26 23:18:37.781153] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:48.737 [2024-04-26 23:18:37.781160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:48.737 [2024-04-26 23:18:37.781173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:48.737 [2024-04-26 23:18:37.781181] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:48.737 [2024-04-26 23:18:37.781187] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:48.737 [2024-04-26 23:18:37.781195] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:48.737 [2024-04-26 23:18:37.781201] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:48.737 [2024-04-26 23:18:37.781206] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:48.737 [2024-04-26 23:18:37.781210] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:48.737 [2024-04-26 23:18:37.781215] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:48.737 [2024-04-26 23:18:37.781220] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:48.737 [2024-04-26 23:18:37.781236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:48.737 [2024-04-26 23:18:37.781246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:48.737 [2024-04-26 23:18:37.781257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:48.737 [2024-04-26 23:18:37.781266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:48.737 [2024-04-26 23:18:37.781277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:48.737 [2024-04-26 23:18:37.781292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:48.737 [2024-04-26 23:18:37.781303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:48.737 [2024-04-26 23:18:37.781312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:48.737 [2024-04-26 23:18:37.781321] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:48.737 [2024-04-26 23:18:37.781326] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:48.737 [2024-04-26 23:18:37.781329] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:48.737 [2024-04-26 23:18:37.781333] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:48.737 [2024-04-26 23:18:37.781339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:48.737 [2024-04-26 23:18:37.781346] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:48.737 [2024-04-26 23:18:37.781350] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:48.737 [2024-04-26 23:18:37.781356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:48.737 [2024-04-26 23:18:37.781365] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:48.737 [2024-04-26 23:18:37.781369] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:48.737 [2024-04-26 23:18:37.781375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:48.737 [2024-04-26 23:18:37.781382] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:48.737 [2024-04-26 23:18:37.781386] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:48.737 [2024-04-26 23:18:37.781392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:48.737 [2024-04-26 23:18:37.781399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:48.737 [2024-04-26 23:18:37.781412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:48.737 [2024-04-26 23:18:37.781421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:48.737 [2024-04-26 23:18:37.781428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:48.737 ===================================================== 00:15:48.737 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:48.737 ===================================================== 00:15:48.737 Controller Capabilities/Features 00:15:48.737 ================================ 00:15:48.737 Vendor ID: 4e58 00:15:48.737 Subsystem Vendor ID: 4e58 00:15:48.737 Serial Number: SPDK1 00:15:48.737 Model Number: SPDK bdev Controller 00:15:48.737 Firmware Version: 24.05 00:15:48.737 Recommended Arb Burst: 6 00:15:48.737 IEEE OUI Identifier: 8d 6b 50 00:15:48.737 Multi-path I/O 00:15:48.737 May have multiple subsystem ports: Yes 00:15:48.737 May have multiple controllers: Yes 00:15:48.737 Associated with SR-IOV VF: No 00:15:48.737 Max Data Transfer Size: 131072 00:15:48.737 Max Number of Namespaces: 32 00:15:48.737 Max Number of I/O Queues: 127 00:15:48.737 NVMe 
Specification Version (VS): 1.3 00:15:48.737 NVMe Specification Version (Identify): 1.3 00:15:48.737 Maximum Queue Entries: 256 00:15:48.737 Contiguous Queues Required: Yes 00:15:48.737 Arbitration Mechanisms Supported 00:15:48.737 Weighted Round Robin: Not Supported 00:15:48.737 Vendor Specific: Not Supported 00:15:48.737 Reset Timeout: 15000 ms 00:15:48.737 Doorbell Stride: 4 bytes 00:15:48.737 NVM Subsystem Reset: Not Supported 00:15:48.737 Command Sets Supported 00:15:48.737 NVM Command Set: Supported 00:15:48.737 Boot Partition: Not Supported 00:15:48.737 Memory Page Size Minimum: 4096 bytes 00:15:48.737 Memory Page Size Maximum: 4096 bytes 00:15:48.737 Persistent Memory Region: Not Supported 00:15:48.737 Optional Asynchronous Events Supported 00:15:48.737 Namespace Attribute Notices: Supported 00:15:48.737 Firmware Activation Notices: Not Supported 00:15:48.737 ANA Change Notices: Not Supported 00:15:48.737 PLE Aggregate Log Change Notices: Not Supported 00:15:48.737 LBA Status Info Alert Notices: Not Supported 00:15:48.737 EGE Aggregate Log Change Notices: Not Supported 00:15:48.737 Normal NVM Subsystem Shutdown event: Not Supported 00:15:48.737 Zone Descriptor Change Notices: Not Supported 00:15:48.737 Discovery Log Change Notices: Not Supported 00:15:48.737 Controller Attributes 00:15:48.737 128-bit Host Identifier: Supported 00:15:48.737 Non-Operational Permissive Mode: Not Supported 00:15:48.737 NVM Sets: Not Supported 00:15:48.737 Read Recovery Levels: Not Supported 00:15:48.737 Endurance Groups: Not Supported 00:15:48.737 Predictable Latency Mode: Not Supported 00:15:48.737 Traffic Based Keep Alive: Not Supported 00:15:48.737 Namespace Granularity: Not Supported 00:15:48.737 SQ Associations: Not Supported 00:15:48.737 UUID List: Not Supported 00:15:48.737 Multi-Domain Subsystem: Not Supported 00:15:48.737 Fixed Capacity Management: Not Supported 00:15:48.737 Variable Capacity Management: Not Supported 00:15:48.737 Delete Endurance Group: Not Supported 00:15:48.737 Delete NVM Set: Not Supported 00:15:48.737 Extended LBA Formats Supported: Not Supported 00:15:48.737 Flexible Data Placement Supported: Not Supported 00:15:48.737 00:15:48.737 Controller Memory Buffer Support 00:15:48.737 ================================ 00:15:48.737 Supported: No 00:15:48.737 00:15:48.737 Persistent Memory Region Support 00:15:48.737 ================================ 00:15:48.737 Supported: No 00:15:48.737 00:15:48.737 Admin Command Set Attributes 00:15:48.737 ============================ 00:15:48.737 Security Send/Receive: Not Supported 00:15:48.737 Format NVM: Not Supported 00:15:48.737 Firmware Activate/Download: Not Supported 00:15:48.737 Namespace Management: Not Supported 00:15:48.737 Device Self-Test: Not Supported 00:15:48.737 Directives: Not Supported 00:15:48.737 NVMe-MI: Not Supported 00:15:48.737 Virtualization Management: Not Supported 00:15:48.737 Doorbell Buffer Config: Not Supported 00:15:48.737 Get LBA Status Capability: Not Supported 00:15:48.737 Command & Feature Lockdown Capability: Not Supported 00:15:48.737 Abort Command Limit: 4 00:15:48.737 Async Event Request Limit: 4 00:15:48.737 Number of Firmware Slots: N/A 00:15:48.737 Firmware Slot 1 Read-Only: N/A 00:15:48.737 Firmware Activation Without Reset: N/A 00:15:48.737 Multiple Update Detection Support: N/A 00:15:48.737 Firmware Update Granularity: No Information Provided 00:15:48.737 Per-Namespace SMART Log: No 00:15:48.737 Asymmetric Namespace Access Log Page: Not Supported 00:15:48.737 Subsystem NQN:
nqn.2019-07.io.spdk:cnode1 00:15:48.737 Command Effects Log Page: Supported 00:15:48.737 Get Log Page Extended Data: Supported 00:15:48.737 Telemetry Log Pages: Not Supported 00:15:48.737 Persistent Event Log Pages: Not Supported 00:15:48.737 Supported Log Pages Log Page: May Support 00:15:48.737 Commands Supported & Effects Log Page: Not Supported 00:15:48.737 Feature Identifiers & Effects Log Page: May Support 00:15:48.737 NVMe-MI Commands & Effects Log Page: May Support 00:15:48.737 Data Area 4 for Telemetry Log: Not Supported 00:15:48.738 Error Log Page Entries Supported: 128 00:15:48.738 Keep Alive: Supported 00:15:48.738 Keep Alive Granularity: 10000 ms 00:15:48.738 00:15:48.738 NVM Command Set Attributes 00:15:48.738 ========================== 00:15:48.738 Submission Queue Entry Size 00:15:48.738 Max: 64 00:15:48.738 Min: 64 00:15:48.738 Completion Queue Entry Size 00:15:48.738 Max: 16 00:15:48.738 Min: 16 00:15:48.738 Number of Namespaces: 32 00:15:48.738 Compare Command: Supported 00:15:48.738 Write Uncorrectable Command: Not Supported 00:15:48.738 Dataset Management Command: Supported 00:15:48.738 Write Zeroes Command: Supported 00:15:48.738 Set Features Save Field: Not Supported 00:15:48.738 Reservations: Not Supported 00:15:48.738 Timestamp: Not Supported 00:15:48.738 Copy: Supported 00:15:48.738 Volatile Write Cache: Present 00:15:48.738 Atomic Write Unit (Normal): 1 00:15:48.738 Atomic Write Unit (PFail): 1 00:15:48.738 Atomic Compare & Write Unit: 1 00:15:48.738 Fused Compare & Write: Supported 00:15:48.738 Scatter-Gather List 00:15:48.738 SGL Command Set: Supported (Dword aligned) 00:15:48.738 SGL Keyed: Not Supported 00:15:48.738 SGL Bit Bucket Descriptor: Not Supported 00:15:48.738 SGL Metadata Pointer: Not Supported 00:15:48.738 Oversized SGL: Not Supported 00:15:48.738 SGL Metadata Address: Not Supported 00:15:48.738 SGL Offset: Not Supported 00:15:48.738 Transport SGL Data Block: Not Supported 00:15:48.738 Replay Protected Memory Block: Not Supported 00:15:48.738 00:15:48.738 Firmware Slot Information 00:15:48.738 ========================= 00:15:48.738 Active slot: 1 00:15:48.738 Slot 1 Firmware Revision: 24.05 00:15:48.738 00:15:48.738 00:15:48.738 Commands Supported and Effects 00:15:48.738 ============================== 00:15:48.738 Admin Commands 00:15:48.738 -------------- 00:15:48.738 Get Log Page (02h): Supported 00:15:48.738 Identify (06h): Supported 00:15:48.738 Abort (08h): Supported 00:15:48.738 Set Features (09h): Supported 00:15:48.738 Get Features (0Ah): Supported 00:15:48.738 Asynchronous Event Request (0Ch): Supported 00:15:48.738 Keep Alive (18h): Supported 00:15:48.738 I/O Commands 00:15:48.738 ------------ 00:15:48.738 Flush (00h): Supported LBA-Change 00:15:48.738 Write (01h): Supported LBA-Change 00:15:48.738 Read (02h): Supported 00:15:48.738 Compare (05h): Supported 00:15:48.738 Write Zeroes (08h): Supported LBA-Change 00:15:48.738 Dataset Management (09h): Supported LBA-Change 00:15:48.738 Copy (19h): Supported LBA-Change 00:15:48.738 Unknown (79h): Supported LBA-Change 00:15:48.738 Unknown (7Ah): Supported 00:15:48.738 00:15:48.738 Error Log 00:15:48.738 ========= 00:15:48.738 00:15:48.738 Arbitration 00:15:48.738 =========== 00:15:48.738 Arbitration Burst: 1 00:15:48.738 00:15:48.738 Power Management 00:15:48.738 ================ 00:15:48.738 Number of Power States: 1 00:15:48.738 Current Power State: Power State #0 00:15:48.738 Power State #0: 00:15:48.738 Max Power: 0.00 W 00:15:48.738 Non-Operational State: Operational 00:15:48.738 Entry
Latency: Not Reported 00:15:48.738 Exit Latency: Not Reported 00:15:48.738 Relative Read Throughput: 0 00:15:48.738 Relative Read Latency: 0 00:15:48.738 Relative Write Throughput: 0 00:15:48.738 Relative Write Latency: 0 00:15:48.738 Idle Power: Not Reported 00:15:48.738 Active Power: Not Reported 00:15:48.738 Non-Operational Permissive Mode: Not Supported 00:15:48.738 00:15:48.738 Health Information 00:15:48.738 ================== 00:15:48.738 Critical Warnings: 00:15:48.738 Available Spare Space: OK 00:15:48.738 Temperature: OK 00:15:48.738 Device Reliability: OK 00:15:48.738 Read Only: No 00:15:48.738 Volatile Memory Backup: OK 00:15:48.738 [2024-04-26 23:18:37.781533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:48.738 [2024-04-26 23:18:37.781548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:48.738 [2024-04-26 23:18:37.781573] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:48.738 [2024-04-26 23:18:37.781582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.738 [2024-04-26 23:18:37.781588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.738 [2024-04-26 23:18:37.781595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.738 [2024-04-26 23:18:37.781601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.738 [2024-04-26 23:18:37.783845] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:48.738 [2024-04-26 23:18:37.783856] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:48.738 [2024-04-26 23:18:37.784682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:48.738 [2024-04-26 23:18:37.784733] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:48.738 [2024-04-26 23:18:37.784740] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:48.738 [2024-04-26 23:18:37.785690] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:48.738 [2024-04-26 23:18:37.785701] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:48.738 [2024-04-26 23:18:37.785759] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:48.738 [2024-04-26 23:18:37.787719] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:48.738 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:48.738 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:48.738 Available Spare: 0% 00:15:48.738 Available Spare Threshold: 0% 00:15:48.738 Life Percentage Used: 0%
00:15:48.738 Data Units Read: 0 00:15:48.738 Data Units Written: 0 00:15:48.738 Host Read Commands: 0 00:15:48.738 Host Write Commands: 0 00:15:48.738 Controller Busy Time: 0 minutes 00:15:48.738 Power Cycles: 0 00:15:48.738 Power On Hours: 0 hours 00:15:48.738 Unsafe Shutdowns: 0 00:15:48.738 Unrecoverable Media Errors: 0 00:15:48.738 Lifetime Error Log Entries: 0 00:15:48.738 Warning Temperature Time: 0 minutes 00:15:48.738 Critical Temperature Time: 0 minutes 00:15:48.738 00:15:48.738 Number of Queues 00:15:48.738 ================ 00:15:48.738 Number of I/O Submission Queues: 127 00:15:48.738 Number of I/O Completion Queues: 127 00:15:48.738 00:15:48.738 Active Namespaces 00:15:48.738 ================= 00:15:48.738 Namespace ID:1 00:15:48.738 Error Recovery Timeout: Unlimited 00:15:48.738 Command Set Identifier: NVM (00h) 00:15:48.738 Deallocate: Supported 00:15:48.738 Deallocated/Unwritten Error: Not Supported 00:15:48.738 Deallocated Read Value: Unknown 00:15:48.738 Deallocate in Write Zeroes: Not Supported 00:15:48.738 Deallocated Guard Field: 0xFFFF 00:15:48.738 Flush: Supported 00:15:48.738 Reservation: Supported 00:15:48.738 Namespace Sharing Capabilities: Multiple Controllers 00:15:48.738 Size (in LBAs): 131072 (0GiB) 00:15:48.738 Capacity (in LBAs): 131072 (0GiB) 00:15:48.738 Utilization (in LBAs): 131072 (0GiB) 00:15:48.738 NGUID: 932DD87758BD42FBA6A34A8F7748E4BD 00:15:48.738 UUID: 932dd877-58bd-42fb-a6a3-4a8f7748e4bd 00:15:48.738 Thin Provisioning: Not Supported 00:15:48.738 Per-NS Atomic Units: Yes 00:15:48.738 Atomic Boundary Size (Normal): 0 00:15:48.738 Atomic Boundary Size (PFail): 0 00:15:48.738 Atomic Boundary Offset: 0 00:15:48.738 Maximum Single Source Range Length: 65535 00:15:48.738 Maximum Copy Length: 65535 00:15:48.738 Maximum Source Range Count: 1 00:15:48.738 NGUID/EUI64 Never Reused: No 00:15:48.738 Namespace Write Protected: No 00:15:48.738 Number of LBA Formats: 1 00:15:48.738 Current LBA Format: LBA Format #00 00:15:48.738 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:48.738 00:15:48.738 23:18:37 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:48.738 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.999 [2024-04-26 23:18:37.988541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.289 [2024-04-26 23:18:43.008309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:54.289 Initializing NVMe Controllers 00:15:54.289 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:54.289 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:54.289 Initialization complete. Launching workers. 
00:15:54.289 ======================================================== 00:15:54.289 Latency(us) 00:15:54.289 Device Information : IOPS MiB/s Average min max 00:15:54.289 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 44032.00 172.00 2906.56 959.24 9475.42 00:15:54.289 ======================================================== 00:15:54.289 Total : 44032.00 172.00 2906.56 959.24 9475.42 00:15:54.289 00:15:54.289 23:18:43 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:54.289 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.289 [2024-04-26 23:18:43.209302] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:59.585 [2024-04-26 23:18:48.250734] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:59.585 Initializing NVMe Controllers 00:15:59.585 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:59.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:59.585 Initialization complete. Launching workers. 00:15:59.585 ======================================================== 00:15:59.585 Latency(us) 00:15:59.585 Device Information : IOPS MiB/s Average min max 00:15:59.585 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16034.00 62.63 7988.32 5991.76 15753.61 00:15:59.585 ======================================================== 00:15:59.585 Total : 16034.00 62.63 7988.32 5991.76 15753.61 00:15:59.585 00:15:59.585 23:18:48 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:59.585 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.585 [2024-04-26 23:18:48.467755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:04.885 [2024-04-26 23:18:53.542071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:04.885 Initializing NVMe Controllers 00:16:04.885 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:04.885 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:04.885 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:04.885 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:04.885 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:04.885 Initialization complete. Launching workers. 
00:16:04.885 Starting thread on core 2 00:16:04.885 Starting thread on core 3 00:16:04.885 Starting thread on core 1 00:16:04.885 23:18:53 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:04.885 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.885 [2024-04-26 23:18:53.809191] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:08.191 [2024-04-26 23:18:56.879993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:08.191 Initializing NVMe Controllers 00:16:08.191 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:08.191 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:08.191 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:08.191 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:08.191 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:08.191 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:08.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:08.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:08.191 Initialization complete. Launching workers. 00:16:08.191 Starting thread on core 1 with urgent priority queue 00:16:08.191 Starting thread on core 2 with urgent priority queue 00:16:08.191 Starting thread on core 3 with urgent priority queue 00:16:08.191 Starting thread on core 0 with urgent priority queue 00:16:08.191 SPDK bdev Controller (SPDK1 ) core 0: 12405.33 IO/s 8.06 secs/100000 ios 00:16:08.191 SPDK bdev Controller (SPDK1 ) core 1: 12813.00 IO/s 7.80 secs/100000 ios 00:16:08.191 SPDK bdev Controller (SPDK1 ) core 2: 9855.33 IO/s 10.15 secs/100000 ios 00:16:08.191 SPDK bdev Controller (SPDK1 ) core 3: 10972.33 IO/s 9.11 secs/100000 ios 00:16:08.191 ======================================================== 00:16:08.191 00:16:08.191 23:18:56 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:08.191 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.191 [2024-04-26 23:18:57.144253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:08.191 [2024-04-26 23:18:57.181469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:08.191 Initializing NVMe Controllers 00:16:08.191 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:08.191 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:08.191 Namespace ID: 1 size: 0GB 00:16:08.191 Initialization complete. 00:16:08.191 INFO: using host memory buffer for IO 00:16:08.191 Hello world! 
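For reference, the hello_world pass above is the simplest end-to-end use of the vfio-user transport that these tests exercise: parse a transport ID, connect (which drives the CC.EN/CSTS.RDY handshake logged earlier), take namespace 1, allocate an I/O queue pair, issue one read, and poll for its completion. The following is only a minimal C sketch of that flow, not the test's actual source: the trtype/traddr/subnqn string and NSID 1 are taken from the log above, the application name is invented, and error handling is trimmed.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool g_done;

/* Completion callback: record that the single read finished. */
static void
read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "read failed\n");
	}
	g_done = true;
}

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_ns *ns;
	struct spdk_nvme_qpair *qpair;
	void *buf;

	spdk_env_opts_init(&opts);
	opts.name = "vfio_user_hello"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same transport ID string the tests pass via -r on the command line. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 "
	    "subnqn:nqn.2019-07.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Connect; internally this performs the register-level enable
	 * handshake (CC.EN = 1, wait for CSTS.RDY = 1) seen in the log. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1); /* NSID 1, as in the log */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);

	/* One 4 KiB DMA-able buffer; read a single 512-byte block at LBA 0. */
	buf = spdk_zmalloc(4096, 4096, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* LBA */, 1 /* blocks */,
			      read_done, NULL, 0);
	while (!g_done) {
		/* SPDK is polled-mode: the app drives completions itself. */
		spdk_nvme_qpair_process_completions(qpair, 0);
	}

	spdk_free(buf);
	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_nvme_detach(ctrlr);
	return 0;
}

Note the polling loop: there are no interrupts in this model, which is why every tool in this log (perf, arbitration, overhead) reports per-core workers that busy-poll their own queue pairs.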
00:16:08.191 23:18:57 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:08.191 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.191 [2024-04-26 23:18:57.438369] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:09.600 Initializing NVMe Controllers 00:16:09.600 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.600 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.600 Initialization complete. Launching workers. 00:16:09.600 submit (in ns) avg, min, max = 7939.3, 3875.0, 4001005.0 00:16:09.600 complete (in ns) avg, min, max = 18711.6, 2341.7, 3998500.0 00:16:09.600 00:16:09.600 Submit histogram 00:16:09.600 ================ 00:16:09.600 Range in us Cumulative Count 00:16:09.600 3.867 - 3.893: 0.6800% ( 132) 00:16:09.600 3.893 - 3.920: 4.4867% ( 739) 00:16:09.600 3.920 - 3.947: 14.1658% ( 1879) 00:16:09.600 3.947 - 3.973: 25.3851% ( 2178) 00:16:09.600 3.973 - 4.000: 37.7943% ( 2409) 00:16:09.600 4.000 - 4.027: 51.0792% ( 2579) 00:16:09.600 4.027 - 4.053: 68.7220% ( 3425) 00:16:09.600 4.053 - 4.080: 84.3919% ( 3042) 00:16:09.600 4.080 - 4.107: 92.9429% ( 1660) 00:16:09.600 4.107 - 4.133: 97.0999% ( 807) 00:16:09.600 4.133 - 4.160: 98.7843% ( 327) 00:16:09.600 4.160 - 4.187: 99.3510% ( 110) 00:16:09.600 4.187 - 4.213: 99.4643% ( 22) 00:16:09.600 4.213 - 4.240: 99.5055% ( 8) 00:16:09.600 4.240 - 4.267: 99.5106% ( 1) 00:16:09.600 4.267 - 4.293: 99.5209% ( 2) 00:16:09.600 4.507 - 4.533: 99.5261% ( 1) 00:16:09.600 4.853 - 4.880: 99.5312% ( 1) 00:16:09.600 4.880 - 4.907: 99.5364% ( 1) 00:16:09.600 4.960 - 4.987: 99.5415% ( 1) 00:16:09.600 4.987 - 5.013: 99.5518% ( 2) 00:16:09.600 5.147 - 5.173: 99.5570% ( 1) 00:16:09.600 5.200 - 5.227: 99.5621% ( 1) 00:16:09.600 5.280 - 5.307: 99.5673% ( 1) 00:16:09.600 5.493 - 5.520: 99.5725% ( 1) 00:16:09.600 5.573 - 5.600: 99.5776% ( 1) 00:16:09.600 5.653 - 5.680: 99.5828% ( 1) 00:16:09.600 5.813 - 5.840: 99.5879% ( 1) 00:16:09.600 5.947 - 5.973: 99.5931% ( 1) 00:16:09.600 5.973 - 6.000: 99.5982% ( 1) 00:16:09.600 6.000 - 6.027: 99.6034% ( 1) 00:16:09.600 6.053 - 6.080: 99.6137% ( 2) 00:16:09.600 6.080 - 6.107: 99.6188% ( 1) 00:16:09.600 6.133 - 6.160: 99.6343% ( 3) 00:16:09.600 6.187 - 6.213: 99.6497% ( 3) 00:16:09.600 6.213 - 6.240: 99.6549% ( 1) 00:16:09.600 6.427 - 6.453: 99.6600% ( 1) 00:16:09.600 6.480 - 6.507: 99.6652% ( 1) 00:16:09.600 6.533 - 6.560: 99.6703% ( 1) 00:16:09.600 6.613 - 6.640: 99.6755% ( 1) 00:16:09.600 6.640 - 6.667: 99.6806% ( 1) 00:16:09.600 6.693 - 6.720: 99.6858% ( 1) 00:16:09.600 6.720 - 6.747: 99.6909% ( 1) 00:16:09.600 6.747 - 6.773: 99.7012% ( 2) 00:16:09.600 6.773 - 6.800: 99.7064% ( 1) 00:16:09.600 6.800 - 6.827: 99.7167% ( 2) 00:16:09.600 6.827 - 6.880: 99.7218% ( 1) 00:16:09.600 6.880 - 6.933: 99.7270% ( 1) 00:16:09.600 7.040 - 7.093: 99.7321% ( 1) 00:16:09.600 7.093 - 7.147: 99.7476% ( 3) 00:16:09.600 7.147 - 7.200: 99.7527% ( 1) 00:16:09.600 7.200 - 7.253: 99.7630% ( 2) 00:16:09.600 7.253 - 7.307: 99.7837% ( 4) 00:16:09.600 7.307 - 7.360: 99.7991% ( 3) 00:16:09.600 7.360 - 7.413: 99.8043% ( 1) 00:16:09.600 7.413 - 7.467: 99.8197% ( 3) 00:16:09.600 7.467 - 7.520: 99.8249% ( 1) 00:16:09.600 7.520 - 7.573: 99.8300% ( 1) 00:16:09.600 7.573 - 7.627: 99.8403% ( 2) 00:16:09.600 7.680 - 7.733: 99.8506% ( 2) 00:16:09.600 7.733 - 7.787: 99.8609% ( 2) 
00:16:09.600 7.787 - 7.840: 99.8661% ( 1) 00:16:09.600 7.893 - 7.947: 99.8764% ( 2) 00:16:09.600 8.107 - 8.160: 99.8815% ( 1) 00:16:09.600 8.160 - 8.213: 99.8918% ( 2) 00:16:09.600 8.213 - 8.267: 99.8970% ( 1) 00:16:09.600 8.587 - 8.640: 99.9021% ( 1) 00:16:09.600 3986.773 - 4014.080: 100.0000% ( 19) 00:16:09.600 00:16:09.600 Complete histogram 00:16:09.600 ================== 00:16:09.600 Range in us Cumulative Count 00:16:09.600 2.333 - 2.347: 0.0052% ( 1) 00:16:09.600 2.347 - 2.360: 0.0258% ( 4) 00:16:09.600 2.360 - 2.373: 0.8809% ( 166) 00:16:09.600 2.373 - 2.387: 1.1281% ( 48) 00:16:09.600 2.387 - 2.400: 1.2363% ( 21) 00:16:09.600 2.400 - 2.413: 1.2929% ( 11) 00:16:09.600 2.413 - 2.427: 35.3268% ( 6607) 00:16:09.600 2.427 - 2.440: 59.9907% ( 4788) 00:16:09.600 2.440 - 2.453: 70.0768% ( 1958) 00:16:09.600 [2024-04-26 23:18:58.460267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:09.600 2.453 - 2.467: 77.8705% ( 1513) 00:16:09.600 2.467 - 2.480: 81.1673% ( 640) 00:16:09.600 2.480 - 2.493: 83.6089% ( 474) 00:16:09.600 2.493 - 2.507: 89.3989% ( 1124) 00:16:09.600 2.507 - 2.520: 94.1225% ( 917) 00:16:09.600 2.520 - 2.533: 96.8217% ( 524) 00:16:09.600 2.533 - 2.547: 98.3877% ( 304) 00:16:09.600 2.547 - 2.560: 99.0676% ( 132) 00:16:09.600 2.560 - 2.573: 99.2788% ( 41) 00:16:09.600 2.573 - 2.587: 99.3355% ( 11) 00:16:09.600 2.587 - 2.600: 99.3510% ( 3) 00:16:09.600 4.213 - 4.240: 99.3561% ( 1) 00:16:09.600 4.320 - 4.347: 99.3613% ( 1) 00:16:09.600 4.453 - 4.480: 99.3819% ( 4) 00:16:09.600 4.480 - 4.507: 99.3870% ( 1) 00:16:09.600 4.587 - 4.613: 99.3922% ( 1) 00:16:09.600 4.640 - 4.667: 99.3973% ( 1) 00:16:09.600 4.693 - 4.720: 99.4025% ( 1) 00:16:09.600 4.800 - 4.827: 99.4076% ( 1) 00:16:09.600 4.853 - 4.880: 99.4128% ( 1) 00:16:09.600 4.907 - 4.933: 99.4231% ( 2) 00:16:09.600 4.987 - 5.013: 99.4282% ( 1) 00:16:09.600 5.013 - 5.040: 99.4437% ( 3) 00:16:09.600 5.040 - 5.067: 99.4488% ( 1) 00:16:09.600 5.120 - 5.147: 99.4540% ( 1) 00:16:09.600 5.173 - 5.200: 99.4643% ( 2) 00:16:09.600 5.200 - 5.227: 99.4746% ( 2) 00:16:09.600 5.280 - 5.307: 99.4849% ( 2) 00:16:09.601 5.307 - 5.333: 99.4900% ( 1) 00:16:09.601 5.387 - 5.413: 99.4952% ( 1) 00:16:09.601 5.467 - 5.493: 99.5106% ( 3) 00:16:09.601 5.520 - 5.547: 99.5158% ( 1) 00:16:09.601 5.573 - 5.600: 99.5209% ( 1) 00:16:09.601 5.627 - 5.653: 99.5261% ( 1) 00:16:09.601 5.760 - 5.787: 99.5364% ( 2) 00:16:09.601 5.787 - 5.813: 99.5415% ( 1) 00:16:09.601 5.813 - 5.840: 99.5518% ( 2) 00:16:09.601 5.973 - 6.000: 99.5570% ( 1) 00:16:09.601 6.027 - 6.053: 99.5621% ( 1) 00:16:09.601 6.107 - 6.133: 99.5673% ( 1) 00:16:09.601 6.240 - 6.267: 99.5776% ( 2) 00:16:09.601 6.747 - 6.773: 99.5828% ( 1) 00:16:09.601 10.987 - 11.040: 99.5879% ( 1) 00:16:09.601 13.973 - 14.080: 99.5931% ( 1) 00:16:09.601 3986.773 - 4014.080: 100.0000% ( 79) 00:16:09.601 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:09.601 [2024-04-26 23:18:58.647497] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*:
rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:09.601 [ 00:16:09.601 { 00:16:09.601 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:09.601 "subtype": "Discovery", 00:16:09.601 "listen_addresses": [], 00:16:09.601 "allow_any_host": true, 00:16:09.601 "hosts": [] 00:16:09.601 }, 00:16:09.601 { 00:16:09.601 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:09.601 "subtype": "NVMe", 00:16:09.601 "listen_addresses": [ 00:16:09.601 { 00:16:09.601 "transport": "VFIOUSER", 00:16:09.601 "trtype": "VFIOUSER", 00:16:09.601 "adrfam": "IPv4", 00:16:09.601 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:09.601 "trsvcid": "0" 00:16:09.601 } 00:16:09.601 ], 00:16:09.601 "allow_any_host": true, 00:16:09.601 "hosts": [], 00:16:09.601 "serial_number": "SPDK1", 00:16:09.601 "model_number": "SPDK bdev Controller", 00:16:09.601 "max_namespaces": 32, 00:16:09.601 "min_cntlid": 1, 00:16:09.601 "max_cntlid": 65519, 00:16:09.601 "namespaces": [ 00:16:09.601 { 00:16:09.601 "nsid": 1, 00:16:09.601 "bdev_name": "Malloc1", 00:16:09.601 "name": "Malloc1", 00:16:09.601 "nguid": "932DD87758BD42FBA6A34A8F7748E4BD", 00:16:09.601 "uuid": "932dd877-58bd-42fb-a6a3-4a8f7748e4bd" 00:16:09.601 } 00:16:09.601 ] 00:16:09.601 }, 00:16:09.601 { 00:16:09.601 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:09.601 "subtype": "NVMe", 00:16:09.601 "listen_addresses": [ 00:16:09.601 { 00:16:09.601 "transport": "VFIOUSER", 00:16:09.601 "trtype": "VFIOUSER", 00:16:09.601 "adrfam": "IPv4", 00:16:09.601 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:09.601 "trsvcid": "0" 00:16:09.601 } 00:16:09.601 ], 00:16:09.601 "allow_any_host": true, 00:16:09.601 "hosts": [], 00:16:09.601 "serial_number": "SPDK2", 00:16:09.601 "model_number": "SPDK bdev Controller", 00:16:09.601 "max_namespaces": 32, 00:16:09.601 "min_cntlid": 1, 00:16:09.601 "max_cntlid": 65519, 00:16:09.601 "namespaces": [ 00:16:09.601 { 00:16:09.601 "nsid": 1, 00:16:09.601 "bdev_name": "Malloc2", 00:16:09.601 "name": "Malloc2", 00:16:09.601 "nguid": "B88926F51BE54A18A39378B91313A01C", 00:16:09.601 "uuid": "b88926f5-1be5-4a18-a393-78b91313a01c" 00:16:09.601 } 00:16:09.601 ] 00:16:09.601 } 00:16:09.601 ] 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3891518 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:09.601 23:18:58 -- common/autotest_common.sh@1251 -- # local i=0 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:09.601 23:18:58 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:09.601 23:18:58 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:09.601 23:18:58 -- common/autotest_common.sh@1262 -- # return 0 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:09.601 23:18:58 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:09.601 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.894 Malloc3 00:16:09.894 [2024-04-26 23:18:58.849257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:09.894 23:18:58 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:09.894 [2024-04-26 23:18:59.018341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:09.894 23:18:59 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:09.894 Asynchronous Event Request test 00:16:09.894 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.894 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.894 Registering asynchronous event callbacks... 00:16:09.894 Starting namespace attribute notice tests for all controllers... 00:16:09.894 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:09.894 aer_cb - Changed Namespace 00:16:09.894 Cleaning up... 00:16:10.157 [ 00:16:10.157 { 00:16:10.157 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:10.157 "subtype": "Discovery", 00:16:10.157 "listen_addresses": [], 00:16:10.157 "allow_any_host": true, 00:16:10.157 "hosts": [] 00:16:10.157 }, 00:16:10.157 { 00:16:10.157 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:10.157 "subtype": "NVMe", 00:16:10.157 "listen_addresses": [ 00:16:10.157 { 00:16:10.157 "transport": "VFIOUSER", 00:16:10.157 "trtype": "VFIOUSER", 00:16:10.157 "adrfam": "IPv4", 00:16:10.157 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:10.157 "trsvcid": "0" 00:16:10.157 } 00:16:10.157 ], 00:16:10.157 "allow_any_host": true, 00:16:10.157 "hosts": [], 00:16:10.157 "serial_number": "SPDK1", 00:16:10.157 "model_number": "SPDK bdev Controller", 00:16:10.157 "max_namespaces": 32, 00:16:10.157 "min_cntlid": 1, 00:16:10.157 "max_cntlid": 65519, 00:16:10.157 "namespaces": [ 00:16:10.157 { 00:16:10.157 "nsid": 1, 00:16:10.157 "bdev_name": "Malloc1", 00:16:10.157 "name": "Malloc1", 00:16:10.157 "nguid": "932DD87758BD42FBA6A34A8F7748E4BD", 00:16:10.157 "uuid": "932dd877-58bd-42fb-a6a3-4a8f7748e4bd" 00:16:10.157 }, 00:16:10.157 { 00:16:10.157 "nsid": 2, 00:16:10.157 "bdev_name": "Malloc3", 00:16:10.157 "name": "Malloc3", 00:16:10.157 "nguid": "F2C7868EA0ED4441B013B4042C6DFFEA", 00:16:10.157 "uuid": "f2c7868e-a0ed-4441-b013-b4042c6dffea" 00:16:10.157 } 00:16:10.157 ] 00:16:10.157 }, 00:16:10.157 { 00:16:10.157 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:10.157 "subtype": "NVMe", 00:16:10.157 "listen_addresses": [ 00:16:10.157 { 00:16:10.157 "transport": "VFIOUSER", 00:16:10.157 "trtype": "VFIOUSER", 00:16:10.157 "adrfam": "IPv4", 00:16:10.157 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:10.157 "trsvcid": "0" 00:16:10.157 } 00:16:10.157 ], 00:16:10.157 "allow_any_host": true, 00:16:10.157 "hosts": [], 00:16:10.157 "serial_number": "SPDK2", 00:16:10.157 "model_number": "SPDK bdev Controller", 00:16:10.157 "max_namespaces": 32, 00:16:10.157 "min_cntlid": 1, 
00:16:10.157 "max_cntlid": 65519, 00:16:10.157 "namespaces": [ 00:16:10.157 { 00:16:10.157 "nsid": 1, 00:16:10.157 "bdev_name": "Malloc2", 00:16:10.157 "name": "Malloc2", 00:16:10.157 "nguid": "B88926F51BE54A18A39378B91313A01C", 00:16:10.157 "uuid": "b88926f5-1be5-4a18-a393-78b91313a01c" 00:16:10.157 } 00:16:10.157 ] 00:16:10.157 } 00:16:10.157 ] 00:16:10.157 23:18:59 -- target/nvmf_vfio_user.sh@44 -- # wait 3891518 00:16:10.157 23:18:59 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.157 23:18:59 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:10.157 23:18:59 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:10.157 23:18:59 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:10.157 [2024-04-26 23:18:59.236874] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:10.157 [2024-04-26 23:18:59.236927] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891532 ] 00:16:10.157 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.157 [2024-04-26 23:18:59.271386] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:10.157 [2024-04-26 23:18:59.280059] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:10.157 [2024-04-26 23:18:59.280080] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa5b7b74000 00:16:10.158 [2024-04-26 23:18:59.281059] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.158 [2024-04-26 23:18:59.282065] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.158 [2024-04-26 23:18:59.283068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.158 [2024-04-26 23:18:59.284074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:10.158 [2024-04-26 23:18:59.285082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:10.158 [2024-04-26 23:18:59.286089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.158 [2024-04-26 23:18:59.287103] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:10.158 [2024-04-26 23:18:59.288098] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.158 [2024-04-26 23:18:59.289105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:10.158 [2024-04-26 23:18:59.289116] vfio_user_pci.c: 
233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa5b6938000 00:16:10.158 [2024-04-26 23:18:59.290443] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:10.158 [2024-04-26 23:18:59.306641] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:10.158 [2024-04-26 23:18:59.306662] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:10.158 [2024-04-26 23:18:59.311736] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:10.158 [2024-04-26 23:18:59.311781] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:10.158 [2024-04-26 23:18:59.311863] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:10.158 [2024-04-26 23:18:59.311878] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:10.158 [2024-04-26 23:18:59.311883] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:10.158 [2024-04-26 23:18:59.312746] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:10.158 [2024-04-26 23:18:59.312754] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:10.158 [2024-04-26 23:18:59.312761] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:10.158 [2024-04-26 23:18:59.313746] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:10.158 [2024-04-26 23:18:59.313754] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:10.158 [2024-04-26 23:18:59.313762] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:10.158 [2024-04-26 23:18:59.314760] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:10.158 [2024-04-26 23:18:59.314769] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:10.158 [2024-04-26 23:18:59.315770] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:10.158 [2024-04-26 23:18:59.315778] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:10.158 [2024-04-26 23:18:59.315782] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:10.158 [2024-04-26 23:18:59.315789] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:10.158 [2024-04-26 23:18:59.315894] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:10.158 [2024-04-26 23:18:59.315899] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:10.158 [2024-04-26 23:18:59.315904] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:10.158 [2024-04-26 23:18:59.316777] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:10.158 [2024-04-26 23:18:59.317780] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:10.158 [2024-04-26 23:18:59.318792] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:10.158 [2024-04-26 23:18:59.319800] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:10.158 [2024-04-26 23:18:59.319842] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:10.158 [2024-04-26 23:18:59.320803] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:10.158 [2024-04-26 23:18:59.320811] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:10.158 [2024-04-26 23:18:59.320816] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:10.158 [2024-04-26 23:18:59.320839] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:10.158 [2024-04-26 23:18:59.320847] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:10.158 [2024-04-26 23:18:59.320860] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:10.158 [2024-04-26 23:18:59.320865] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.158 [2024-04-26 23:18:59.320876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.158 [2024-04-26 23:18:59.328844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:10.158 [2024-04-26 23:18:59.328854] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:10.158 [2024-04-26 23:18:59.328859] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:10.158 [2024-04-26 23:18:59.328864] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:10.158 [2024-04-26 23:18:59.328868] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:10.158 [2024-04-26 23:18:59.328873] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:10.158 [2024-04-26 23:18:59.328877] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:10.158 [2024-04-26 23:18:59.328882] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:10.158 [2024-04-26 23:18:59.328889] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:10.158 [2024-04-26 23:18:59.328899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:10.158 [2024-04-26 23:18:59.336846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:10.158 [2024-04-26 23:18:59.336861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.158 [2024-04-26 23:18:59.336870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.158 [2024-04-26 23:18:59.336878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.158 [2024-04-26 23:18:59.336888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.158 [2024-04-26 23:18:59.336893] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:10.158 [2024-04-26 23:18:59.336901] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:10.158 [2024-04-26 23:18:59.336910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:10.158 [2024-04-26 23:18:59.344844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:10.158 [2024-04-26 23:18:59.344851] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:10.158 [2024-04-26 23:18:59.344856] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:10.158 [2024-04-26 23:18:59.344865] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:10.158 [2024-04-26 23:18:59.344870] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:10.158 [2024-04-26 23:18:59.344879] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:10.158 [2024-04-26 23:18:59.352843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:10.158 [2024-04-26 23:18:59.352893] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:10.158 [2024-04-26 23:18:59.352901] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.352908] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:10.159 [2024-04-26 23:18:59.352912] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:10.159 [2024-04-26 23:18:59.352918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:10.159 [2024-04-26 23:18:59.360843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:10.159 [2024-04-26 23:18:59.360854] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:10.159 [2024-04-26 23:18:59.360866] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.360873] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.360880] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:10.159 [2024-04-26 23:18:59.360884] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.159 [2024-04-26 23:18:59.360890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.159 [2024-04-26 23:18:59.368842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:10.159 [2024-04-26 23:18:59.368855] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.368863] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.368872] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:10.159 [2024-04-26 23:18:59.368876] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.159 [2024-04-26 23:18:59.368882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.159 [2024-04-26 23:18:59.376842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:10.159 [2024-04-26 23:18:59.376852] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.376858] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.376866] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.376871] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.376876] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.376881] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:10.159 [2024-04-26 23:18:59.376886] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:10.159 [2024-04-26 23:18:59.376891] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:10.159 [2024-04-26 23:18:59.376906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:10.159 [2024-04-26 23:18:59.384841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:10.159 [2024-04-26 23:18:59.384855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:10.159 [2024-04-26 23:18:59.392844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:10.159 [2024-04-26 23:18:59.392863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:10.159 [2024-04-26 23:18:59.400843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:10.159 [2024-04-26 23:18:59.400856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:10.159 [2024-04-26 23:18:59.408843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:10.159 [2024-04-26 23:18:59.408855] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:10.159 [2024-04-26 23:18:59.408860] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:10.159 [2024-04-26 23:18:59.408863] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:10.159 [2024-04-26 23:18:59.408867] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:10.159 [2024-04-26 23:18:59.408873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:10.159 
[2024-04-26 23:18:59.408880] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:10.159 [2024-04-26 23:18:59.408887] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:10.159 [2024-04-26 23:18:59.408893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:10.159 [2024-04-26 23:18:59.408900] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:10.159 [2024-04-26 23:18:59.408904] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.159 [2024-04-26 23:18:59.408910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.159 [2024-04-26 23:18:59.408918] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:10.159 [2024-04-26 23:18:59.408922] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:10.159 [2024-04-26 23:18:59.408928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:10.421 [2024-04-26 23:18:59.416842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:10.421 [2024-04-26 23:18:59.416860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:10.421 [2024-04-26 23:18:59.416869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:10.421 [2024-04-26 23:18:59.416876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:10.421 ===================================================== 00:16:10.421 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:10.421 ===================================================== 00:16:10.421 Controller Capabilities/Features 00:16:10.421 ================================ 00:16:10.421 Vendor ID: 4e58 00:16:10.421 Subsystem Vendor ID: 4e58 00:16:10.421 Serial Number: SPDK2 00:16:10.421 Model Number: SPDK bdev Controller 00:16:10.421 Firmware Version: 24.05 00:16:10.421 Recommended Arb Burst: 6 00:16:10.421 IEEE OUI Identifier: 8d 6b 50 00:16:10.421 Multi-path I/O 00:16:10.421 May have multiple subsystem ports: Yes 00:16:10.421 May have multiple controllers: Yes 00:16:10.421 Associated with SR-IOV VF: No 00:16:10.421 Max Data Transfer Size: 131072 00:16:10.421 Max Number of Namespaces: 32 00:16:10.421 Max Number of I/O Queues: 127 00:16:10.421 NVMe Specification Version (VS): 1.3 00:16:10.421 NVMe Specification Version (Identify): 1.3 00:16:10.421 Maximum Queue Entries: 256 00:16:10.421 Contiguous Queues Required: Yes 00:16:10.421 Arbitration Mechanisms Supported 00:16:10.421 Weighted Round Robin: Not Supported 00:16:10.421 Vendor Specific: Not Supported 00:16:10.421 Reset Timeout: 15000 ms 00:16:10.421 Doorbell Stride: 4 bytes 00:16:10.421 NVM Subsystem Reset: Not Supported 00:16:10.421 Command Sets Supported 00:16:10.421 NVM Command Set: Supported 00:16:10.421 Boot Partition: Not Supported 00:16:10.421 
Memory Page Size Minimum: 4096 bytes 00:16:10.421 Memory Page Size Maximum: 4096 bytes 00:16:10.421 Persistent Memory Region: Not Supported 00:16:10.421 Optional Asynchronous Events Supported 00:16:10.421 Namespace Attribute Notices: Supported 00:16:10.421 Firmware Activation Notices: Not Supported 00:16:10.421 ANA Change Notices: Not Supported 00:16:10.421 PLE Aggregate Log Change Notices: Not Supported 00:16:10.421 LBA Status Info Alert Notices: Not Supported 00:16:10.421 EGE Aggregate Log Change Notices: Not Supported 00:16:10.422 Normal NVM Subsystem Shutdown event: Not Supported 00:16:10.422 Zone Descriptor Change Notices: Not Supported 00:16:10.422 Discovery Log Change Notices: Not Supported 00:16:10.422 Controller Attributes 00:16:10.422 128-bit Host Identifier: Supported 00:16:10.422 Non-Operational Permissive Mode: Not Supported 00:16:10.422 NVM Sets: Not Supported 00:16:10.422 Read Recovery Levels: Not Supported 00:16:10.422 Endurance Groups: Not Supported 00:16:10.422 Predictable Latency Mode: Not Supported 00:16:10.422 Traffic Based Keep Alive: Not Supported 00:16:10.422 Namespace Granularity: Not Supported 00:16:10.422 SQ Associations: Not Supported 00:16:10.422 UUID List: Not Supported 00:16:10.422 Multi-Domain Subsystem: Not Supported 00:16:10.422 Fixed Capacity Management: Not Supported 00:16:10.422 Variable Capacity Management: Not Supported 00:16:10.422 Delete Endurance Group: Not Supported 00:16:10.422 Delete NVM Set: Not Supported 00:16:10.422 Extended LBA Formats Supported: Not Supported 00:16:10.422 Flexible Data Placement Supported: Not Supported 00:16:10.422 00:16:10.422 Controller Memory Buffer Support 00:16:10.422 ================================ 00:16:10.422 Supported: No 00:16:10.422 00:16:10.422 Persistent Memory Region Support 00:16:10.422 ================================ 00:16:10.422 Supported: No 00:16:10.422 00:16:10.422 Admin Command Set Attributes 00:16:10.422 ============================ 00:16:10.422 Security Send/Receive: Not Supported 00:16:10.422 Format NVM: Not Supported 00:16:10.422 Firmware Activate/Download: Not Supported 00:16:10.422 Namespace Management: Not Supported 00:16:10.422 Device Self-Test: Not Supported 00:16:10.422 Directives: Not Supported 00:16:10.422 NVMe-MI: Not Supported 00:16:10.422 Virtualization Management: Not Supported 00:16:10.422 Doorbell Buffer Config: Not Supported 00:16:10.422 Get LBA Status Capability: Not Supported 00:16:10.422 Command & Feature Lockdown Capability: Not Supported 00:16:10.422 Abort Command Limit: 4 00:16:10.422 Async Event Request Limit: 4 00:16:10.422 Number of Firmware Slots: N/A 00:16:10.422 Firmware Slot 1 Read-Only: N/A 00:16:10.422 Firmware Activation Without Reset: N/A 00:16:10.422 Multiple Update Detection Support: N/A 00:16:10.422 Firmware Update Granularity: No Information Provided 00:16:10.422 Per-Namespace SMART Log: No 00:16:10.422 Asymmetric Namespace Access Log Page: Not Supported 00:16:10.422 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:10.422 Command Effects Log Page: Supported 00:16:10.422 Get Log Page Extended Data: Supported 00:16:10.422 Telemetry Log Pages: Not Supported 00:16:10.422 Persistent Event Log Pages: Not Supported 00:16:10.422 Supported Log Pages Log Page: May Support 00:16:10.422 Commands Supported & Effects Log Page: Not Supported 00:16:10.422 Feature Identifiers & Effects Log Page: May Support 00:16:10.422 NVMe-MI Commands & Effects Log Page: May Support 00:16:10.422 Data Area 4 for Telemetry Log: Not Supported 00:16:10.422 Error Log Page Entries Supported: 128
00:16:10.422 Keep Alive: Supported 00:16:10.422 Keep Alive Granularity: 10000 ms 00:16:10.422 00:16:10.422 NVM Command Set Attributes 00:16:10.422 ========================== 00:16:10.422 Submission Queue Entry Size 00:16:10.422 Max: 64 00:16:10.422 Min: 64 00:16:10.422 Completion Queue Entry Size 00:16:10.422 Max: 16 00:16:10.422 Min: 16 00:16:10.422 Number of Namespaces: 32 00:16:10.422 Compare Command: Supported 00:16:10.422 Write Uncorrectable Command: Not Supported 00:16:10.422 Dataset Management Command: Supported 00:16:10.422 Write Zeroes Command: Supported 00:16:10.422 Set Features Save Field: Not Supported 00:16:10.422 Reservations: Not Supported 00:16:10.422 Timestamp: Not Supported 00:16:10.422 Copy: Supported 00:16:10.422 Volatile Write Cache: Present 00:16:10.422 Atomic Write Unit (Normal): 1 00:16:10.422 Atomic Write Unit (PFail): 1 00:16:10.422 Atomic Compare & Write Unit: 1 00:16:10.422 Fused Compare & Write: Supported 00:16:10.422 Scatter-Gather List 00:16:10.422 SGL Command Set: Supported (Dword aligned) 00:16:10.422 SGL Keyed: Not Supported 00:16:10.422 SGL Bit Bucket Descriptor: Not Supported 00:16:10.422 SGL Metadata Pointer: Not Supported 00:16:10.422 Oversized SGL: Not Supported 00:16:10.422 SGL Metadata Address: Not Supported 00:16:10.422 SGL Offset: Not Supported 00:16:10.422 Transport SGL Data Block: Not Supported 00:16:10.422 Replay Protected Memory Block: Not Supported 00:16:10.422 00:16:10.422 Firmware Slot Information 00:16:10.422 ========================= 00:16:10.422 Active slot: 1 00:16:10.422 Slot 1 Firmware Revision: 24.05 00:16:10.422 00:16:10.422 00:16:10.422 Commands Supported and Effects 00:16:10.422 ============================== 00:16:10.422 Admin Commands 00:16:10.422 -------------- 00:16:10.422 Get Log Page (02h): Supported 00:16:10.422 Identify (06h): Supported 00:16:10.422 Abort (08h): Supported 00:16:10.422 Set Features (09h): Supported 00:16:10.422 Get Features (0Ah): Supported 00:16:10.422 Asynchronous Event Request (0Ch): Supported 00:16:10.422 Keep Alive (18h): Supported 00:16:10.422 I/O Commands 00:16:10.422 ------------ 00:16:10.422 Flush (00h): Supported LBA-Change 00:16:10.422 Write (01h): Supported LBA-Change 00:16:10.422 Read (02h): Supported 00:16:10.422 Compare (05h): Supported 00:16:10.422 Write Zeroes (08h): Supported LBA-Change 00:16:10.422 Dataset Management (09h): Supported LBA-Change 00:16:10.422 Copy (19h): Supported LBA-Change 00:16:10.422 Unknown (79h): Supported LBA-Change 00:16:10.422 Unknown (7Ah): Supported 00:16:10.422 00:16:10.422 Error Log 00:16:10.422 ========= 00:16:10.422 00:16:10.422 Arbitration 00:16:10.422 =========== 00:16:10.422 Arbitration Burst: 1 00:16:10.422 00:16:10.422 Power Management 00:16:10.422 ================ 00:16:10.422 Number of Power States: 1 00:16:10.422 Current Power State: Power State #0 00:16:10.422 Power State #0: 00:16:10.422 Max Power: 0.00 W 00:16:10.422 Non-Operational State: Operational 00:16:10.423 Entry Latency: Not Reported 00:16:10.423 Exit Latency: Not Reported 00:16:10.423 Relative Read Throughput: 0 00:16:10.423 Relative Read Latency: 0 00:16:10.423 Relative Write Throughput: 0 00:16:10.423 Relative Write Latency: 0 00:16:10.423 Idle Power: Not Reported 00:16:10.423 Active Power: Not Reported 00:16:10.423 Non-Operational Permissive Mode: Not Supported 00:16:10.423 00:16:10.423 Health Information 00:16:10.423 ================== 00:16:10.423 Critical Warnings: 00:16:10.423 Available Spare Space: OK 00:16:10.423 Temperature: OK 00:16:10.423 Device Reliability: OK 00:16:10.423 
Read Only: No 00:16:10.423 Volatile Memory Backup: OK 00:16:10.423 Current Temperature: 0 Kelvin (-273 Celsius)
[2024-04-26 23:18:59.416980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:10.423 [2024-04-26 23:18:59.424842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:10.423 [2024-04-26 23:18:59.424869] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:10.423 [2024-04-26 23:18:59.424878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.423 [2024-04-26 23:18:59.424884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.423 [2024-04-26 23:18:59.424890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.423 [2024-04-26 23:18:59.424897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.423 [2024-04-26 23:18:59.424953] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:10.423 [2024-04-26 23:18:59.424962] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:10.423 [2024-04-26 23:18:59.425953] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:10.423 [2024-04-26 23:18:59.426000] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:10.423 [2024-04-26 23:18:59.426006] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:10.423 [2024-04-26 23:18:59.426960] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:10.423 [2024-04-26 23:18:59.426971] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:10.423 [2024-04-26 23:18:59.427018] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:10.423 [2024-04-26 23:18:59.428394] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:10.423
Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:10.423 Available Spare: 0% 00:16:10.423 Available Spare Threshold: 0% 00:16:10.423 Life Percentage Used: 0% 00:16:10.423 Data Units Read: 0 00:16:10.423 Data Units Written: 0 00:16:10.423 Host Read Commands: 0 00:16:10.423 Host Write Commands: 0 00:16:10.423 Controller Busy Time: 0 minutes 00:16:10.423 Power Cycles: 0 00:16:10.423 Power On Hours: 0 hours 00:16:10.423 Unsafe Shutdowns: 0 00:16:10.423 Unrecoverable Media Errors: 0 00:16:10.423 Lifetime Error Log Entries: 0 00:16:10.423 Warning Temperature Time: 0 minutes 00:16:10.423 Critical Temperature Time: 0 minutes 00:16:10.423 00:16:10.423 Number of Queues 00:16:10.423 ================ 00:16:10.423 Number of I/O Submission Queues: 127
00:16:10.423 Number of I/O Completion Queues: 127 00:16:10.423 00:16:10.423 Active Namespaces 00:16:10.423 ================= 00:16:10.423 Namespace ID:1 00:16:10.423 Error Recovery Timeout: Unlimited 00:16:10.423 Command Set Identifier: NVM (00h) 00:16:10.423 Deallocate: Supported 00:16:10.423 Deallocated/Unwritten Error: Not Supported 00:16:10.423 Deallocated Read Value: Unknown 00:16:10.423 Deallocate in Write Zeroes: Not Supported 00:16:10.423 Deallocated Guard Field: 0xFFFF 00:16:10.423 Flush: Supported 00:16:10.423 Reservation: Supported 00:16:10.423 Namespace Sharing Capabilities: Multiple Controllers 00:16:10.423 Size (in LBAs): 131072 (0GiB) 00:16:10.423 Capacity (in LBAs): 131072 (0GiB) 00:16:10.423 Utilization (in LBAs): 131072 (0GiB) 00:16:10.423 NGUID: B88926F51BE54A18A39378B91313A01C 00:16:10.423 UUID: b88926f5-1be5-4a18-a393-78b91313a01c 00:16:10.423 Thin Provisioning: Not Supported 00:16:10.423 Per-NS Atomic Units: Yes 00:16:10.423 Atomic Boundary Size (Normal): 0 00:16:10.423 Atomic Boundary Size (PFail): 0 00:16:10.423 Atomic Boundary Offset: 0 00:16:10.423 Maximum Single Source Range Length: 65535 00:16:10.423 Maximum Copy Length: 65535 00:16:10.423 Maximum Source Range Count: 1 00:16:10.423 NGUID/EUI64 Never Reused: No 00:16:10.423 Namespace Write Protected: No 00:16:10.423 Number of LBA Formats: 1 00:16:10.423 Current LBA Format: LBA Format #00 00:16:10.423 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:10.423 00:16:10.423 23:18:59 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:10.423 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.423 [2024-04-26 23:18:59.630127] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:15.711 [2024-04-26 23:19:04.735043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:15.711 Initializing NVMe Controllers 00:16:15.711 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:15.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:15.711 Initialization complete. Launching workers. 
00:16:15.711 ======================================================== 00:16:15.711 Latency(us) 00:16:15.711 Device Information : IOPS MiB/s Average min max 00:16:15.711 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 44105.03 172.29 2901.72 916.26 5849.59 00:16:15.711 ======================================================== 00:16:15.711 Total : 44105.03 172.29 2901.72 916.26 5849.59 00:16:15.711 00:16:15.711 23:19:04 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:15.711 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.711 [2024-04-26 23:19:04.930666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.999 [2024-04-26 23:19:09.948750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.999 Initializing NVMe Controllers 00:16:20.999 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:20.999 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:20.999 Initialization complete. Launching workers. 00:16:20.999 ======================================================== 00:16:20.999 Latency(us) 00:16:20.999 Device Information : IOPS MiB/s Average min max 00:16:20.999 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33485.93 130.80 3821.62 1235.43 8648.31 00:16:20.999 ======================================================== 00:16:20.999 Total : 33485.93 130.80 3821.62 1235.43 8648.31 00:16:20.999 00:16:20.999 23:19:09 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:20.999 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.999 [2024-04-26 23:19:10.183076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:26.290 [2024-04-26 23:19:15.326937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:26.290 Initializing NVMe Controllers 00:16:26.290 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:26.290 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:26.290 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:26.290 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:26.290 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:26.290 Initialization complete. Launching workers. 
00:16:26.290 Starting thread on core 2 00:16:26.290 Starting thread on core 3 00:16:26.291 Starting thread on core 1 00:16:26.291 23:19:15 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:26.291 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.551 [2024-04-26 23:19:15.597316] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:29.849 [2024-04-26 23:19:18.682979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:29.849 Initializing NVMe Controllers 00:16:29.849 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.849 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.849 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:29.849 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:29.849 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:29.849 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:29.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:29.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:29.849 Initialization complete. Launching workers. 00:16:29.849 Starting thread on core 1 with urgent priority queue 00:16:29.849 Starting thread on core 2 with urgent priority queue 00:16:29.849 Starting thread on core 3 with urgent priority queue 00:16:29.849 Starting thread on core 0 with urgent priority queue 00:16:29.849 SPDK bdev Controller (SPDK2 ) core 0: 7931.33 IO/s 12.61 secs/100000 ios 00:16:29.849 SPDK bdev Controller (SPDK2 ) core 1: 13670.00 IO/s 7.32 secs/100000 ios 00:16:29.849 SPDK bdev Controller (SPDK2 ) core 2: 14952.33 IO/s 6.69 secs/100000 ios 00:16:29.849 SPDK bdev Controller (SPDK2 ) core 3: 9045.33 IO/s 11.06 secs/100000 ios 00:16:29.849 ======================================================== 00:16:29.849 00:16:29.849 23:19:18 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:29.849 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.849 [2024-04-26 23:19:18.946265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:29.849 [2024-04-26 23:19:18.956324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:29.849 Initializing NVMe Controllers 00:16:29.849 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.849 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.849 Namespace ID: 1 size: 0GB 00:16:29.849 Initialization complete. 00:16:29.849 INFO: using host memory buffer for IO 00:16:29.849 Hello world! 
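For reference, the perf runs above can be replayed by hand against the same vfio-user endpoint. A minimal sketch, assuming the SPDK build-tree layout this job uses; the binary path, transport ID, and every flag below are copied from the commands logged above, so only the paths need adjusting for a different target:

# Sketch: replay the spdk_nvme_perf runs from this log by hand.
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# 4 KiB reads for 5 s on core 1 (mask 0x2), queue depth 128 (first run above)
$PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# Same shape with a write workload (second run above)
$PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2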
00:16:29.849 23:19:19 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:29.849 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.110 [2024-04-26 23:19:19.213130] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:31.053 Initializing NVMe Controllers 00:16:31.053 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.053 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.053 Initialization complete. Launching workers. 00:16:31.053 submit (in ns) avg, min, max = 7448.5, 3865.0, 3999864.2 00:16:31.053 complete (in ns) avg, min, max = 22437.6, 2358.3, 5991806.7 00:16:31.053 00:16:31.053 Submit histogram 00:16:31.053 ================ 00:16:31.053 Range in us Cumulative Count 00:16:31.053 3.840 - 3.867: 0.0066% ( 1) 00:16:31.053 3.867 - 3.893: 2.0897% ( 316) 00:16:31.053 3.893 - 3.920: 7.3237% ( 794) 00:16:31.053 3.920 - 3.947: 16.2953% ( 1361) 00:16:31.053 3.947 - 3.973: 27.3896% ( 1683) 00:16:31.053 3.973 - 4.000: 38.7080% ( 1717) 00:16:31.053 4.000 - 4.027: 52.7884% ( 2136) 00:16:31.053 4.027 - 4.053: 70.0066% ( 2612) 00:16:31.053 4.053 - 4.080: 84.9967% ( 2274) 00:16:31.053 4.080 - 4.107: 93.2235% ( 1248) 00:16:31.053 4.107 - 4.133: 96.9941% ( 572) 00:16:31.053 4.133 - 4.160: 98.8003% ( 274) 00:16:31.053 4.160 - 4.187: 99.3144% ( 78) 00:16:31.053 4.187 - 4.213: 99.4792% ( 25) 00:16:31.053 4.213 - 4.240: 99.4990% ( 3) 00:16:31.053 4.240 - 4.267: 99.5122% ( 2) 00:16:31.053 4.533 - 4.560: 99.5188% ( 1) 00:16:31.053 4.907 - 4.933: 99.5254% ( 1) 00:16:31.053 4.960 - 4.987: 99.5320% ( 1) 00:16:31.053 5.040 - 5.067: 99.5386% ( 1) 00:16:31.053 5.440 - 5.467: 99.5517% ( 2) 00:16:31.053 5.547 - 5.573: 99.5583% ( 1) 00:16:31.053 5.573 - 5.600: 99.5649% ( 1) 00:16:31.053 5.840 - 5.867: 99.5715% ( 1) 00:16:31.053 5.920 - 5.947: 99.5781% ( 1) 00:16:31.053 6.027 - 6.053: 99.5979% ( 3) 00:16:31.053 6.053 - 6.080: 99.6045% ( 1) 00:16:31.053 6.080 - 6.107: 99.6111% ( 1) 00:16:31.053 6.107 - 6.133: 99.6374% ( 4) 00:16:31.053 6.133 - 6.160: 99.6440% ( 1) 00:16:31.053 6.160 - 6.187: 99.6506% ( 1) 00:16:31.053 6.213 - 6.240: 99.6572% ( 1) 00:16:31.053 6.240 - 6.267: 99.6638% ( 1) 00:16:31.053 6.267 - 6.293: 99.6902% ( 4) 00:16:31.053 6.320 - 6.347: 99.6968% ( 1) 00:16:31.054 6.347 - 6.373: 99.7034% ( 1) 00:16:31.054 6.400 - 6.427: 99.7100% ( 1) 00:16:31.054 6.427 - 6.453: 99.7297% ( 3) 00:16:31.054 6.533 - 6.560: 99.7363% ( 1) 00:16:31.054 6.560 - 6.587: 99.7627% ( 4) 00:16:31.054 6.613 - 6.640: 99.7693% ( 1) 00:16:31.054 6.667 - 6.693: 99.7759% ( 1) 00:16:31.054 6.720 - 6.747: 99.7825% ( 1) 00:16:31.054 6.747 - 6.773: 99.7956% ( 2) 00:16:31.054 6.773 - 6.800: 99.8088% ( 2) 00:16:31.054 6.827 - 6.880: 99.8154% ( 1) 00:16:31.054 6.933 - 6.987: 99.8220% ( 1) 00:16:31.054 6.987 - 7.040: 99.8418% ( 3) 00:16:31.054 7.253 - 7.307: 99.8550% ( 2) 00:16:31.054 7.413 - 7.467: 99.8682% ( 2) 00:16:31.054 7.467 - 7.520: 99.8748% ( 1) 00:16:31.054 7.520 - 7.573: 99.8879% ( 2) 00:16:31.054 7.840 - 7.893: 99.8945% ( 1) 00:16:31.054 8.107 - 8.160: 99.9011% ( 1) 00:16:31.054 8.480 - 8.533: 99.9077% ( 1) 00:16:31.054 9.600 - 9.653: 99.9143% ( 1) 00:16:31.054 3986.773 - 4014.080: 100.0000% ( 13) 00:16:31.054 00:16:31.054 Complete histogram 00:16:31.054 ================== 00:16:31.054 Range in us Cumulative Count 00:16:31.054 2.347 - 2.360: 0.0066% ( 1) 
00:16:31.054 2.360 - 2.373: 1.2459% ( 188) 00:16:31.054 2.373 - 2.387: 1.5162% ( 41) 00:16:31.054 2.387 - 2.400: 1.6348% ( 18) 00:16:31.054 2.400 - 2.413: 32.3072% ( 4653) 00:16:31.054 2.413 - 2.427: 57.2050% ( 3777) 00:16:31.054 2.427 - 2.440: 68.4970% ( 1713) 00:16:31.054 2.440 - 2.453: 76.8952% ( 1274) 00:16:31.054 2.453 - 2.467: 81.0415% ( 629) 00:16:31.054 2.467 - 2.480: 82.3533% ( 199) 00:16:31.054 2.480 - 2.493: 85.6361% ( 498) 00:16:31.054 2.493 - 2.507: 90.8833% ( 796) 00:16:31.054 2.507 - 2.520: 94.7726% ( 590) 00:16:31.054 2.520 - 2.533: 97.1127% ( 355) 00:16:31.054 2.533 - 2.547: 98.4509% ( 203) 00:16:31.054 2.547 - 2.560: 98.9914% ( 82) 00:16:31.054 2.560 - 2.573: 99.1562% ( 25) 00:16:31.054 2.573 - 2.587: 99.1826% ( 4) 00:16:31.054 2.587 - 2.600: 99.1892% ( 1) 00:16:31.054 4.293 - 4.320: 99.1958% ( 1) 00:16:31.054
[2024-04-26 23:19:20.307498] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:31.314
4.320 - 4.347: 99.2090% ( 2) 00:16:31.314 4.347 - 4.373: 99.2156% ( 1) 00:16:31.314 4.373 - 4.400: 99.2221% ( 1) 00:16:31.314 4.400 - 4.427: 99.2287% ( 1) 00:16:31.314 4.427 - 4.453: 99.2353% ( 1) 00:16:31.314 4.453 - 4.480: 99.2419% ( 1) 00:16:31.314 4.560 - 4.587: 99.2485% ( 1) 00:16:31.314 4.667 - 4.693: 99.2551% ( 1) 00:16:31.314 4.693 - 4.720: 99.2815% ( 4) 00:16:31.314 4.720 - 4.747: 99.2881% ( 1) 00:16:31.314 4.747 - 4.773: 99.2947% ( 1) 00:16:31.314 4.773 - 4.800: 99.3013% ( 1) 00:16:31.314 4.800 - 4.827: 99.3078% ( 1) 00:16:31.314 4.853 - 4.880: 99.3276% ( 3) 00:16:31.314 4.880 - 4.907: 99.3606% ( 5) 00:16:31.314 4.907 - 4.933: 99.3672% ( 1) 00:16:31.314 4.960 - 4.987: 99.3738% ( 1) 00:16:31.314 4.987 - 5.013: 99.3804% ( 1) 00:16:31.314 5.013 - 5.040: 99.3869% ( 1) 00:16:31.314 5.093 - 5.120: 99.3935% ( 1) 00:16:31.314 5.173 - 5.200: 99.4001% ( 1) 00:16:31.314 5.387 - 5.413: 99.4067% ( 1) 00:16:31.314 5.413 - 5.440: 99.4133% ( 1) 00:16:31.314 5.493 - 5.520: 99.4199% ( 1) 00:16:31.314 5.627 - 5.653: 99.4265% ( 1) 00:16:31.314 5.707 - 5.733: 99.4331% ( 1) 00:16:31.314 5.867 - 5.893: 99.4463% ( 2) 00:16:31.314 6.160 - 6.187: 99.4529% ( 1) 00:16:31.314 6.213 - 6.240: 99.4595% ( 1) 00:16:31.314 7.093 - 7.147: 99.4661% ( 1) 00:16:31.314 7.467 - 7.520: 99.4726% ( 1) 00:16:31.314 10.933 - 10.987: 99.4792% ( 1) 00:16:31.314 14.187 - 14.293: 99.4858% ( 1) 00:16:31.314 44.800 - 45.013: 99.4924% ( 1) 00:16:31.314 49.920 - 50.133: 99.4990% ( 1) 00:16:31.314 1604.267 - 1611.093: 99.5056% ( 1) 00:16:31.314 3986.773 - 4014.080: 99.9934% ( 74) 00:16:31.314 5980.160 - 6007.467: 100.0000% ( 1) 00:16:31.314 00 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:31.314 [ 00:16:31.314 { 00:16:31.314 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:31.314 "subtype": "Discovery", 00:16:31.314 "listen_addresses": [], 00:16:31.314 "allow_any_host": true, 00:16:31.314 "hosts": [] 00:16:31.314 }, 00:16:31.314 { 00:16:31.314 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:31.314 "subtype": "NVMe", 00:16:31.314 "listen_addresses":
[ 00:16:31.314 { 00:16:31.314 "transport": "VFIOUSER", 00:16:31.314 "trtype": "VFIOUSER", 00:16:31.314 "adrfam": "IPv4", 00:16:31.314 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:31.314 "trsvcid": "0" 00:16:31.314 } 00:16:31.314 ], 00:16:31.314 "allow_any_host": true, 00:16:31.314 "hosts": [], 00:16:31.314 "serial_number": "SPDK1", 00:16:31.314 "model_number": "SPDK bdev Controller", 00:16:31.314 "max_namespaces": 32, 00:16:31.314 "min_cntlid": 1, 00:16:31.314 "max_cntlid": 65519, 00:16:31.314 "namespaces": [ 00:16:31.314 { 00:16:31.314 "nsid": 1, 00:16:31.314 "bdev_name": "Malloc1", 00:16:31.314 "name": "Malloc1", 00:16:31.314 "nguid": "932DD87758BD42FBA6A34A8F7748E4BD", 00:16:31.314 "uuid": "932dd877-58bd-42fb-a6a3-4a8f7748e4bd" 00:16:31.314 }, 00:16:31.314 { 00:16:31.314 "nsid": 2, 00:16:31.314 "bdev_name": "Malloc3", 00:16:31.314 "name": "Malloc3", 00:16:31.314 "nguid": "F2C7868EA0ED4441B013B4042C6DFFEA", 00:16:31.314 "uuid": "f2c7868e-a0ed-4441-b013-b4042c6dffea" 00:16:31.314 } 00:16:31.314 ] 00:16:31.314 }, 00:16:31.314 { 00:16:31.314 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:31.314 "subtype": "NVMe", 00:16:31.314 "listen_addresses": [ 00:16:31.314 { 00:16:31.314 "transport": "VFIOUSER", 00:16:31.314 "trtype": "VFIOUSER", 00:16:31.314 "adrfam": "IPv4", 00:16:31.314 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:31.314 "trsvcid": "0" 00:16:31.314 } 00:16:31.314 ], 00:16:31.314 "allow_any_host": true, 00:16:31.314 "hosts": [], 00:16:31.314 "serial_number": "SPDK2", 00:16:31.314 "model_number": "SPDK bdev Controller", 00:16:31.314 "max_namespaces": 32, 00:16:31.314 "min_cntlid": 1, 00:16:31.314 "max_cntlid": 65519, 00:16:31.314 "namespaces": [ 00:16:31.314 { 00:16:31.314 "nsid": 1, 00:16:31.314 "bdev_name": "Malloc2", 00:16:31.314 "name": "Malloc2", 00:16:31.314 "nguid": "B88926F51BE54A18A39378B91313A01C", 00:16:31.314 "uuid": "b88926f5-1be5-4a18-a393-78b91313a01c" 00:16:31.314 } 00:16:31.314 ] 00:16:31.314 } 00:16:31.314 ] 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3895696 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:31.314 23:19:20 -- common/autotest_common.sh@1251 -- # local i=0 00:16:31.314 23:19:20 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:31.314 23:19:20 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:31.314 23:19:20 -- common/autotest_common.sh@1262 -- # return 0 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:31.314 23:19:20 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:31.575 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.575 Malloc4 00:16:31.575 [2024-04-26 23:19:20.689257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:31.575 23:19:20 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:31.836 [2024-04-26 23:19:20.851319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:31.836 23:19:20 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:31.836 Asynchronous Event Request test 00:16:31.836 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.836 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.836 Registering asynchronous event callbacks... 00:16:31.836 Starting namespace attribute notice tests for all controllers... 00:16:31.836 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:31.836 aer_cb - Changed Namespace 00:16:31.836 Cleaning up... 00:16:31.836 [ 00:16:31.836 { 00:16:31.836 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:31.836 "subtype": "Discovery", 00:16:31.836 "listen_addresses": [], 00:16:31.836 "allow_any_host": true, 00:16:31.836 "hosts": [] 00:16:31.836 }, 00:16:31.836 { 00:16:31.836 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:31.836 "subtype": "NVMe", 00:16:31.836 "listen_addresses": [ 00:16:31.836 { 00:16:31.836 "transport": "VFIOUSER", 00:16:31.836 "trtype": "VFIOUSER", 00:16:31.836 "adrfam": "IPv4", 00:16:31.836 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:31.836 "trsvcid": "0" 00:16:31.836 } 00:16:31.836 ], 00:16:31.836 "allow_any_host": true, 00:16:31.836 "hosts": [], 00:16:31.836 "serial_number": "SPDK1", 00:16:31.836 "model_number": "SPDK bdev Controller", 00:16:31.836 "max_namespaces": 32, 00:16:31.836 "min_cntlid": 1, 00:16:31.836 "max_cntlid": 65519, 00:16:31.836 "namespaces": [ 00:16:31.836 { 00:16:31.836 "nsid": 1, 00:16:31.836 "bdev_name": "Malloc1", 00:16:31.836 "name": "Malloc1", 00:16:31.836 "nguid": "932DD87758BD42FBA6A34A8F7748E4BD", 00:16:31.836 "uuid": "932dd877-58bd-42fb-a6a3-4a8f7748e4bd" 00:16:31.836 }, 00:16:31.836 { 00:16:31.836 "nsid": 2, 00:16:31.836 "bdev_name": "Malloc3", 00:16:31.836 "name": "Malloc3", 00:16:31.836 "nguid": "F2C7868EA0ED4441B013B4042C6DFFEA", 00:16:31.836 "uuid": "f2c7868e-a0ed-4441-b013-b4042c6dffea" 00:16:31.836 } 00:16:31.836 ] 00:16:31.836 }, 00:16:31.836 { 00:16:31.836 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:31.836 "subtype": "NVMe", 00:16:31.836 "listen_addresses": [ 00:16:31.836 { 00:16:31.836 "transport": "VFIOUSER", 00:16:31.836 "trtype": "VFIOUSER", 00:16:31.836 "adrfam": "IPv4", 00:16:31.836 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:31.836 "trsvcid": "0" 00:16:31.836 } 00:16:31.836 ], 00:16:31.837 "allow_any_host": true, 00:16:31.837 "hosts": [], 00:16:31.837 "serial_number": "SPDK2", 00:16:31.837 "model_number": "SPDK bdev Controller", 00:16:31.837 "max_namespaces": 32, 00:16:31.837 "min_cntlid": 1, 
00:16:31.837 "max_cntlid": 65519, 00:16:31.837 "namespaces": [ 00:16:31.837 { 00:16:31.837 "nsid": 1, 00:16:31.837 "bdev_name": "Malloc2", 00:16:31.837 "name": "Malloc2", 00:16:31.837 "nguid": "B88926F51BE54A18A39378B91313A01C", 00:16:31.837 "uuid": "b88926f5-1be5-4a18-a393-78b91313a01c" 00:16:31.837 }, 00:16:31.837 { 00:16:31.837 "nsid": 2, 00:16:31.837 "bdev_name": "Malloc4", 00:16:31.837 "name": "Malloc4", 00:16:31.837 "nguid": "9EF71E55FD2240C1A133D926E1EAD81D", 00:16:31.837 "uuid": "9ef71e55-fd22-40c1-a133-d926e1ead81d" 00:16:31.837 } 00:16:31.837 ] 00:16:31.837 } 00:16:31.837 ] 00:16:31.837 23:19:21 -- target/nvmf_vfio_user.sh@44 -- # wait 3895696 00:16:31.837 23:19:21 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:31.837 23:19:21 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3886772 00:16:31.837 23:19:21 -- common/autotest_common.sh@936 -- # '[' -z 3886772 ']' 00:16:31.837 23:19:21 -- common/autotest_common.sh@940 -- # kill -0 3886772 00:16:31.837 23:19:21 -- common/autotest_common.sh@941 -- # uname 00:16:31.837 23:19:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:31.837 23:19:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3886772 00:16:32.098 23:19:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:32.098 23:19:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:32.098 23:19:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3886772' 00:16:32.098 killing process with pid 3886772 00:16:32.098 23:19:21 -- common/autotest_common.sh@955 -- # kill 3886772 00:16:32.098 [2024-04-26 23:19:21.104009] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:32.098 23:19:21 -- common/autotest_common.sh@960 -- # wait 3886772 00:16:32.098 23:19:21 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:32.098 23:19:21 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:32.098 23:19:21 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:32.098 23:19:21 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:32.098 23:19:21 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:32.098 23:19:21 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:32.098 23:19:21 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3895896 00:16:32.098 23:19:21 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3895896' 00:16:32.098 Process pid: 3895896 00:16:32.098 23:19:21 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:32.098 23:19:21 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3895896 00:16:32.098 23:19:21 -- common/autotest_common.sh@817 -- # '[' -z 3895896 ']' 00:16:32.098 23:19:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.098 23:19:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:32.098 23:19:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:32.098 23:19:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:32.098 23:19:21 -- common/autotest_common.sh@10 -- # set +x 00:16:32.098 [2024-04-26 23:19:21.316154] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:32.098 [2024-04-26 23:19:21.317076] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:32.098 [2024-04-26 23:19:21.317114] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.098 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.360 [2024-04-26 23:19:21.377867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.360 [2024-04-26 23:19:21.406272] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.360 [2024-04-26 23:19:21.406315] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.360 [2024-04-26 23:19:21.406324] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.360 [2024-04-26 23:19:21.406332] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.360 [2024-04-26 23:19:21.406339] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:32.360 [2024-04-26 23:19:21.406455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.360 [2024-04-26 23:19:21.406550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.360 [2024-04-26 23:19:21.406707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.360 [2024-04-26 23:19:21.406709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.360 [2024-04-26 23:19:21.462055] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:32.360 [2024-04-26 23:19:21.462228] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:32.360 [2024-04-26 23:19:21.462533] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:32.360 [2024-04-26 23:19:21.462725] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:32.360 [2024-04-26 23:19:21.462806] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
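The xtrace that follows rebuilds both vfio-user controllers one RPC at a time. Condensed into a standalone sketch (not the harness script itself), the sequence looks roughly like this; the rpc.py path, the -M -I transport flags, the Malloc geometry, the NQNs, and the socket paths are all taken from the trace below:

# Sketch of the per-controller setup that nvmf_vfio_user.sh replays below.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t VFIOUSER -M -I        # interrupt-mode transport
for i in 1 2; do
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  $RPC bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done

The transport is created once and the per-controller steps then repeat per socket directory, which is exactly the shape of the trace that follows.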
00:16:32.360 23:19:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:32.360 23:19:21 -- common/autotest_common.sh@850 -- # return 0 00:16:32.360 23:19:21 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:33.304 23:19:22 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:33.566 23:19:22 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:33.566 23:19:22 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:33.566 23:19:22 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:33.566 23:19:22 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:33.566 23:19:22 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:33.566 Malloc1 00:16:33.828 23:19:22 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:33.828 23:19:22 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:34.088 23:19:23 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:34.088 23:19:23 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:34.088 23:19:23 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:34.088 23:19:23 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:34.349 Malloc2 00:16:34.349 23:19:23 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:34.610 23:19:23 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:34.610 23:19:23 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:34.872 23:19:24 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:34.872 23:19:24 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3895896 00:16:34.872 23:19:24 -- common/autotest_common.sh@936 -- # '[' -z 3895896 ']' 00:16:34.872 23:19:24 -- common/autotest_common.sh@940 -- # kill -0 3895896 00:16:34.872 23:19:24 -- common/autotest_common.sh@941 -- # uname 00:16:34.872 23:19:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.872 23:19:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3895896 00:16:34.872 23:19:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:34.872 23:19:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:34.872 23:19:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3895896' 00:16:34.872 killing process with pid 3895896 00:16:34.872 23:19:24 -- common/autotest_common.sh@955 -- # kill 3895896 00:16:34.872 23:19:24 -- common/autotest_common.sh@960 -- # wait 3895896 00:16:35.134 23:19:24 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:16:35.134 23:19:24 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:35.134 00:16:35.134 real 0m50.055s 00:16:35.134 user 3m18.750s 00:16:35.134 sys 0m2.870s 00:16:35.134 23:19:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:35.134 23:19:24 -- common/autotest_common.sh@10 -- # set +x 00:16:35.134 ************************************ 00:16:35.134 END TEST nvmf_vfio_user 00:16:35.134 ************************************ 00:16:35.134 23:19:24 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:35.134 23:19:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:35.134 23:19:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.134 23:19:24 -- common/autotest_common.sh@10 -- # set +x 00:16:35.397 ************************************ 00:16:35.397 START TEST nvmf_vfio_user_nvme_compliance 00:16:35.397 ************************************ 00:16:35.397 23:19:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:35.397 * Looking for test storage... 00:16:35.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:35.397 23:19:24 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.397 23:19:24 -- nvmf/common.sh@7 -- # uname -s 00:16:35.397 23:19:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.397 23:19:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.397 23:19:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.397 23:19:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.397 23:19:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.397 23:19:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.397 23:19:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.397 23:19:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.397 23:19:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.397 23:19:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.397 23:19:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:35.397 23:19:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:35.397 23:19:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.397 23:19:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.397 23:19:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.397 23:19:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.397 23:19:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.397 23:19:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.397 23:19:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.397 23:19:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.397 23:19:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.397 23:19:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.397 23:19:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.397 23:19:24 -- paths/export.sh@5 -- # export PATH 00:16:35.397 23:19:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.397 23:19:24 -- nvmf/common.sh@47 -- # : 0 00:16:35.397 23:19:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.397 23:19:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.397 23:19:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.397 23:19:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.397 23:19:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.397 23:19:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.397 23:19:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.397 23:19:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.397 23:19:24 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.397 23:19:24 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.397 23:19:24 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:35.397 23:19:24 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:35.397 23:19:24 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:35.397 23:19:24 -- compliance/compliance.sh@20 -- # nvmfpid=3896644 00:16:35.397 23:19:24 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3896644' 00:16:35.397 Process pid: 3896644 00:16:35.397 23:19:24 
-- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:35.397 23:19:24 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:35.397 23:19:24 -- compliance/compliance.sh@24 -- # waitforlisten 3896644 00:16:35.397 23:19:24 -- common/autotest_common.sh@817 -- # '[' -z 3896644 ']' 00:16:35.397 23:19:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.397 23:19:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:35.397 23:19:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.397 23:19:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:35.397 23:19:24 -- common/autotest_common.sh@10 -- # set +x 00:16:35.397 [2024-04-26 23:19:24.591852] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:35.397 [2024-04-26 23:19:24.591903] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.397 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.660 [2024-04-26 23:19:24.656622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.660 [2024-04-26 23:19:24.686108] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.660 [2024-04-26 23:19:24.686147] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.660 [2024-04-26 23:19:24.686154] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.660 [2024-04-26 23:19:24.686160] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.660 [2024-04-26 23:19:24.686166] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
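For reference, the vfio-user provisioning that the earlier nvmf_vfio_user run traced boils down to one rpc.py sequence per emulated controller. A minimal sketch for device 1, with the long Jenkins workspace prefix dropped and the run-specific transport flags omitted; every command and argument below is taken verbatim from the trace above:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER            # transport must exist before any VFIOUSER listener
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1             # the listener address is a directory, not IP:port
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1         # 64 MiB bdev with 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Unlike a TCP listener, the -a argument here names a directory in which the target creates its vfio-user socket; the -s 0 service id is carried along for uniformity but the directory is what identifies the endpoint.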
00:16:35.660 [2024-04-26 23:19:24.686278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.660 [2024-04-26 23:19:24.686395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.660 [2024-04-26 23:19:24.686398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.233 23:19:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:36.233 23:19:25 -- common/autotest_common.sh@850 -- # return 0 00:16:36.233 23:19:25 -- compliance/compliance.sh@26 -- # sleep 1 00:16:37.174 23:19:26 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:37.174 23:19:26 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:37.174 23:19:26 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:37.174 23:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.174 23:19:26 -- common/autotest_common.sh@10 -- # set +x 00:16:37.174 23:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.174 23:19:26 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:37.174 23:19:26 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:37.174 23:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.174 23:19:26 -- common/autotest_common.sh@10 -- # set +x 00:16:37.174 malloc0 00:16:37.174 23:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.174 23:19:26 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:37.174 23:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.174 23:19:26 -- common/autotest_common.sh@10 -- # set +x 00:16:37.174 23:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.174 23:19:26 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:37.174 23:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.174 23:19:26 -- common/autotest_common.sh@10 -- # set +x 00:16:37.435 23:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.435 23:19:26 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:37.435 23:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.435 23:19:26 -- common/autotest_common.sh@10 -- # set +x 00:16:37.435 23:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.435 23:19:26 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:37.435 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.435 00:16:37.435 00:16:37.435 CUnit - A unit testing framework for C - Version 2.1-3 00:16:37.435 http://cunit.sourceforge.net/ 00:16:37.435 00:16:37.435 00:16:37.435 Suite: nvme_compliance 00:16:37.435 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-26 23:19:26.618350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.435 [2024-04-26 23:19:26.619726] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:37.435 [2024-04-26 23:19:26.619741] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:37.435 [2024-04-26 23:19:26.619748] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:37.435 
[2024-04-26 23:19:26.621371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.435 passed 00:16:37.696 Test: admin_identify_ctrlr_verify_fused ...[2024-04-26 23:19:26.722998] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.696 [2024-04-26 23:19:26.726016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.696 passed 00:16:37.696 Test: admin_identify_ns ...[2024-04-26 23:19:26.821104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.696 [2024-04-26 23:19:26.888849] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:37.696 [2024-04-26 23:19:26.896847] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:37.696 [2024-04-26 23:19:26.917962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.957 passed 00:16:37.957 Test: admin_get_features_mandatory_features ...[2024-04-26 23:19:27.009627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.957 [2024-04-26 23:19:27.012645] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.957 passed 00:16:37.957 Test: admin_get_features_optional_features ...[2024-04-26 23:19:27.112224] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.957 [2024-04-26 23:19:27.115239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:37.957 passed 00:16:37.957 Test: admin_set_features_number_of_queues ...[2024-04-26 23:19:27.209396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.217 [2024-04-26 23:19:27.317952] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.217 passed 00:16:38.217 Test: admin_get_log_page_mandatory_logs ...[2024-04-26 23:19:27.408569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.217 [2024-04-26 23:19:27.411591] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.217 passed 00:16:38.478 Test: admin_get_log_page_with_lpo ...[2024-04-26 23:19:27.507088] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.478 [2024-04-26 23:19:27.578850] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:38.478 [2024-04-26 23:19:27.591923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.478 passed 00:16:38.478 Test: fabric_property_get ...[2024-04-26 23:19:27.687682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.478 [2024-04-26 23:19:27.688937] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:38.478 [2024-04-26 23:19:27.690705] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.739 passed 00:16:38.739 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-26 23:19:27.788379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.739 [2024-04-26 23:19:27.789603] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:38.739 [2024-04-26 23:19:27.791405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:16:38.739 passed 00:16:38.739 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-26 23:19:27.890637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.739 [2024-04-26 23:19:27.973843] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:38.739 [2024-04-26 23:19:27.989842] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:39.000 [2024-04-26 23:19:27.994930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.000 passed 00:16:39.000 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-26 23:19:28.088552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.000 [2024-04-26 23:19:28.089772] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:39.000 [2024-04-26 23:19:28.091566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.000 passed 00:16:39.000 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-26 23:19:28.188086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.261 [2024-04-26 23:19:28.267849] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:39.261 [2024-04-26 23:19:28.291853] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:39.261 [2024-04-26 23:19:28.296926] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.261 passed 00:16:39.261 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-26 23:19:28.386553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.261 [2024-04-26 23:19:28.387770] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:39.261 [2024-04-26 23:19:28.387790] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:39.261 [2024-04-26 23:19:28.389567] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.261 passed 00:16:39.261 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-26 23:19:28.482654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.522 [2024-04-26 23:19:28.573843] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:39.522 [2024-04-26 23:19:28.581846] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:39.522 [2024-04-26 23:19:28.589847] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:39.522 [2024-04-26 23:19:28.597846] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:39.522 [2024-04-26 23:19:28.626931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.522 passed 00:16:39.522 Test: admin_create_io_sq_verify_pc ...[2024-04-26 23:19:28.720502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.522 [2024-04-26 23:19:28.734850] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:39.522 [2024-04-26 23:19:28.752638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.782 passed 00:16:39.782 Test: admin_create_io_qp_max_qps ...[2024-04-26 23:19:28.848202] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.723 [2024-04-26 23:19:29.940846] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:41.294 [2024-04-26 23:19:30.330030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.294 passed 00:16:41.294 Test: admin_create_io_sq_shared_cq ...[2024-04-26 23:19:30.422165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.553 [2024-04-26 23:19:30.553856] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:41.553 [2024-04-26 23:19:30.590905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.553 passed 00:16:41.553 00:16:41.554 Run Summary: Type Total Ran Passed Failed Inactive 00:16:41.554 suites 1 1 n/a 0 0 00:16:41.554 tests 18 18 18 0 0 00:16:41.554 asserts 360 360 360 0 n/a 00:16:41.554 00:16:41.554 Elapsed time = 1.668 seconds 00:16:41.554 23:19:30 -- compliance/compliance.sh@42 -- # killprocess 3896644 00:16:41.554 23:19:30 -- common/autotest_common.sh@936 -- # '[' -z 3896644 ']' 00:16:41.554 23:19:30 -- common/autotest_common.sh@940 -- # kill -0 3896644 00:16:41.554 23:19:30 -- common/autotest_common.sh@941 -- # uname 00:16:41.554 23:19:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:41.554 23:19:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3896644 00:16:41.554 23:19:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:41.554 23:19:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:41.554 23:19:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3896644' 00:16:41.554 killing process with pid 3896644 00:16:41.554 23:19:30 -- common/autotest_common.sh@955 -- # kill 3896644 00:16:41.554 23:19:30 -- common/autotest_common.sh@960 -- # wait 3896644 00:16:41.814 23:19:30 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:41.814 23:19:30 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:41.814 00:16:41.814 real 0m6.437s 00:16:41.814 user 0m18.552s 00:16:41.814 sys 0m0.429s 00:16:41.814 23:19:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:41.814 23:19:30 -- common/autotest_common.sh@10 -- # set +x 00:16:41.814 ************************************ 00:16:41.814 END TEST nvmf_vfio_user_nvme_compliance 00:16:41.814 ************************************ 00:16:41.814 23:19:30 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:41.814 23:19:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:41.814 23:19:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.814 23:19:30 -- common/autotest_common.sh@10 -- # set +x 00:16:41.814 ************************************ 00:16:41.814 START TEST nvmf_vfio_user_fuzz 00:16:41.814 ************************************ 00:16:41.814 23:19:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:42.075 * Looking for test storage... 
00:16:42.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.075 23:19:31 -- nvmf/common.sh@7 -- # uname -s 00:16:42.075 23:19:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.075 23:19:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.075 23:19:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.075 23:19:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.075 23:19:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.075 23:19:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.075 23:19:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.075 23:19:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.075 23:19:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.075 23:19:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.075 23:19:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:42.075 23:19:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:42.075 23:19:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.075 23:19:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.075 23:19:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.075 23:19:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.075 23:19:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.075 23:19:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.075 23:19:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.075 23:19:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.075 23:19:31 -- paths/export.sh@5 -- # export PATH 00:16:42.075 23:19:31 -- nvmf/common.sh@47 -- # : 0 00:16:42.075 23:19:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.075 23:19:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.075 23:19:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.075 23:19:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.075 23:19:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.075 23:19:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.075 23:19:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.075 23:19:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3897990 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3897990' 00:16:42.075 Process pid: 3897990 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3897990 00:16:42.075 23:19:31 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:42.075 23:19:31 -- common/autotest_common.sh@817 -- # '[' -z 3897990 ']' 00:16:42.075 23:19:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.075 23:19:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:42.075 23:19:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
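waitforlisten here simply blocks until the freshly forked nvmf_tgt answers on its RPC socket; conceptually it is a bounded poll loop. A sketch of the idea, not the actual autotest_common.sh implementation (rpc_get_methods is a standard SPDK RPC that succeeds once the app is serving requests; the 100-retry bound mirrors max_retries above):

    # poll the UNIX-domain RPC socket until the target is ready
    i=0
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        i=$((i + 1))
        [ "$i" -ge 100 ] && { echo 'target never came up' >&2; exit 1; }
        sleep 0.5
    done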
00:16:42.075 23:19:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:42.075 23:19:31 -- common/autotest_common.sh@10 -- # set +x 00:16:42.335 23:19:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:42.335 23:19:31 -- common/autotest_common.sh@850 -- # return 0 00:16:42.335 23:19:31 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:43.276 23:19:32 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:43.276 23:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.276 23:19:32 -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 23:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.276 23:19:32 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:43.276 23:19:32 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:43.276 23:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.276 23:19:32 -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 malloc0 00:16:43.276 23:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.276 23:19:32 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:43.276 23:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.276 23:19:32 -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 23:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.276 23:19:32 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:43.276 23:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.276 23:19:32 -- common/autotest_common.sh@10 -- # set +x 00:16:43.277 23:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.277 23:19:32 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:43.277 23:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.277 23:19:32 -- common/autotest_common.sh@10 -- # set +x 00:16:43.277 23:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.277 23:19:32 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:43.277 23:19:32 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:15.395 Fuzzing completed. 
Shutting down the fuzz application 00:17:15.395 00:17:15.395 Dumping successful admin opcodes: 00:17:15.395 8, 9, 10, 24, 00:17:15.395 Dumping successful io opcodes: 00:17:15.395 0, 00:17:15.395 NS: 0x200003a1ef00 I/O qp, Total commands completed: 935960, total successful commands: 3662, random_seed: 2481158208 00:17:15.395 NS: 0x200003a1ef00 admin qp, Total commands completed: 231402, total successful commands: 1854, random_seed: 1600727360 00:17:15.395 23:20:02 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:15.395 23:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.395 23:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:15.395 23:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.395 23:20:02 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3897990 00:17:15.395 23:20:02 -- common/autotest_common.sh@936 -- # '[' -z 3897990 ']' 00:17:15.395 23:20:02 -- common/autotest_common.sh@940 -- # kill -0 3897990 00:17:15.395 23:20:02 -- common/autotest_common.sh@941 -- # uname 00:17:15.395 23:20:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:15.395 23:20:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3897990 00:17:15.395 23:20:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:15.395 23:20:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:15.395 23:20:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3897990' 00:17:15.395 killing process with pid 3897990 00:17:15.395 23:20:02 -- common/autotest_common.sh@955 -- # kill 3897990 00:17:15.395 23:20:02 -- common/autotest_common.sh@960 -- # wait 3897990 00:17:15.395 23:20:03 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:15.395 23:20:03 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:15.395 00:17:15.395 real 0m32.074s 00:17:15.395 user 0m34.906s 00:17:15.395 sys 0m23.846s 00:17:15.395 23:20:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:15.395 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:17:15.396 ************************************ 00:17:15.396 END TEST nvmf_vfio_user_fuzz 00:17:15.396 ************************************ 00:17:15.396 23:20:03 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:15.396 23:20:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:15.396 23:20:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:15.396 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:17:15.396 ************************************ 00:17:15.396 START TEST nvmf_host_management 00:17:15.396 ************************************ 00:17:15.396 23:20:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:15.396 * Looking for test storage... 
00:17:15.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.396 23:20:03 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.396 23:20:03 -- nvmf/common.sh@7 -- # uname -s 00:17:15.396 23:20:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.396 23:20:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.396 23:20:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.396 23:20:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.396 23:20:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.396 23:20:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.396 23:20:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.396 23:20:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.396 23:20:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.396 23:20:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.396 23:20:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:15.396 23:20:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:15.396 23:20:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.396 23:20:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.396 23:20:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.396 23:20:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.396 23:20:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.396 23:20:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.396 23:20:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.396 23:20:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.396 23:20:03 -- paths/export.sh@5 -- # export PATH 00:17:15.396 23:20:03 -- nvmf/common.sh@47 -- # : 0 00:17:15.396 23:20:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.396 23:20:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.396 23:20:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.396 23:20:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.396 23:20:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.396 23:20:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.396 23:20:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.396 23:20:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.396 23:20:03 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.396 23:20:03 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.396 23:20:03 -- target/host_management.sh@105 -- # nvmftestinit 00:17:15.396 23:20:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:15.396 23:20:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.396 23:20:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:15.396 23:20:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:15.396 23:20:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:15.396 23:20:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.396 23:20:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.396 23:20:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.396 23:20:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:15.396 23:20:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:15.396 23:20:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.396 23:20:03 -- common/autotest_common.sh@10 -- # set +x 00:17:22.074 23:20:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:22.074 23:20:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:22.074 23:20:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:22.074 23:20:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:22.074 23:20:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:22.074 23:20:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:22.074 23:20:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:22.075 23:20:10 -- nvmf/common.sh@295 -- # net_devs=() 00:17:22.075 23:20:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:22.075
23:20:10 -- nvmf/common.sh@296 -- # e810=() 00:17:22.075 23:20:10 -- nvmf/common.sh@296 -- # local -ga e810 00:17:22.075 23:20:10 -- nvmf/common.sh@297 -- # x722=() 00:17:22.075 23:20:10 -- nvmf/common.sh@297 -- # local -ga x722 00:17:22.075 23:20:10 -- nvmf/common.sh@298 -- # mlx=() 00:17:22.075 23:20:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:22.075 23:20:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:22.075 23:20:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:22.075 23:20:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:22.075 23:20:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:22.075 23:20:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:22.075 23:20:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:22.075 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:22.075 23:20:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:22.075 23:20:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:22.075 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:22.075 23:20:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:22.075 23:20:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:22.075 23:20:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.075 23:20:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:22.075 23:20:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.075 23:20:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:17:22.075 Found net devices under 0000:31:00.0: cvl_0_0 00:17:22.075 23:20:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.075 23:20:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:22.075 23:20:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.075 23:20:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:22.075 23:20:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.075 23:20:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:22.075 Found net devices under 0000:31:00.1: cvl_0_1 00:17:22.075 23:20:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.075 23:20:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:22.075 23:20:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:22.075 23:20:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:22.075 23:20:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.075 23:20:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:22.075 23:20:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:22.075 23:20:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:22.075 23:20:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:22.075 23:20:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:22.075 23:20:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:22.075 23:20:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:22.075 23:20:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.075 23:20:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:22.075 23:20:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:22.075 23:20:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:22.075 23:20:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:22.075 23:20:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:22.075 23:20:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:22.075 23:20:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:22.075 23:20:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:22.075 23:20:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:22.075 23:20:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:22.075 23:20:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:22.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:17:22.075 00:17:22.075 --- 10.0.0.2 ping statistics --- 00:17:22.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.075 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:17:22.075 23:20:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:22.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:22.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:17:22.075 00:17:22.075 --- 10.0.0.1 ping statistics --- 00:17:22.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.075 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:17:22.075 23:20:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.075 23:20:10 -- nvmf/common.sh@411 -- # return 0 00:17:22.075 23:20:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:22.075 23:20:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.075 23:20:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:22.075 23:20:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.075 23:20:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:22.075 23:20:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:22.075 23:20:10 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:17:22.075 23:20:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:22.075 23:20:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:22.075 23:20:10 -- common/autotest_common.sh@10 -- # set +x 00:17:22.075 ************************************ 00:17:22.075 START TEST nvmf_host_management 00:17:22.075 ************************************ 00:17:22.075 23:20:10 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:17:22.075 23:20:10 -- target/host_management.sh@69 -- # starttarget 00:17:22.075 23:20:10 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:22.075 23:20:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:22.075 23:20:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:22.075 23:20:10 -- common/autotest_common.sh@10 -- # set +x 00:17:22.075 23:20:10 -- nvmf/common.sh@470 -- # nvmfpid=3907834 00:17:22.075 23:20:10 -- nvmf/common.sh@471 -- # waitforlisten 3907834 00:17:22.075 23:20:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:22.075 23:20:10 -- common/autotest_common.sh@817 -- # '[' -z 3907834 ']' 00:17:22.075 23:20:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.075 23:20:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:22.075 23:20:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.075 23:20:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:22.075 23:20:10 -- common/autotest_common.sh@10 -- # set +x 00:17:22.075 [2024-04-26 23:20:10.650223] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:22.075 [2024-04-26 23:20:10.650279] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.075 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.075 [2024-04-26 23:20:10.721493] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.075 [2024-04-26 23:20:10.760575] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:22.075 [2024-04-26 23:20:10.760625] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.075 [2024-04-26 23:20:10.760633] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.075 [2024-04-26 23:20:10.760640] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.075 [2024-04-26 23:20:10.760646] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.075 [2024-04-26 23:20:10.760770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.075 [2024-04-26 23:20:10.760915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:22.075 [2024-04-26 23:20:10.761050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.075 [2024-04-26 23:20:10.761050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:22.336 23:20:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:22.336 23:20:11 -- common/autotest_common.sh@850 -- # return 0 00:17:22.336 23:20:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:22.336 23:20:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:22.336 23:20:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.336 23:20:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.336 23:20:11 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:22.336 23:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.336 23:20:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.336 [2024-04-26 23:20:11.473485] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.336 23:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.336 23:20:11 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:22.336 23:20:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:22.336 23:20:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.336 23:20:11 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:22.336 23:20:11 -- target/host_management.sh@23 -- # cat 00:17:22.336 23:20:11 -- target/host_management.sh@30 -- # rpc_cmd 00:17:22.336 23:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.336 23:20:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.336 Malloc0 00:17:22.336 [2024-04-26 23:20:11.532735] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.336 23:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.336 23:20:11 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:22.336 23:20:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:22.336 23:20:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.336 23:20:11 -- target/host_management.sh@73 -- # perfpid=3908149 00:17:22.336 23:20:11 -- target/host_management.sh@74 -- # waitforlisten 3908149 /var/tmp/bdevperf.sock 00:17:22.336 23:20:11 -- common/autotest_common.sh@817 -- # '[' -z 3908149 ']' 00:17:22.336 23:20:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.336 23:20:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:22.336 23:20:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:17:22.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.336 23:20:11 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:22.336 23:20:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:22.336 23:20:11 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:22.336 23:20:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.336 23:20:11 -- nvmf/common.sh@521 -- # config=() 00:17:22.336 23:20:11 -- nvmf/common.sh@521 -- # local subsystem config 00:17:22.336 23:20:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:22.336 23:20:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:22.336 { 00:17:22.336 "params": { 00:17:22.336 "name": "Nvme$subsystem", 00:17:22.336 "trtype": "$TEST_TRANSPORT", 00:17:22.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.337 "adrfam": "ipv4", 00:17:22.337 "trsvcid": "$NVMF_PORT", 00:17:22.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.337 "hdgst": ${hdgst:-false}, 00:17:22.337 "ddgst": ${ddgst:-false} 00:17:22.337 }, 00:17:22.337 "method": "bdev_nvme_attach_controller" 00:17:22.337 } 00:17:22.337 EOF 00:17:22.337 )") 00:17:22.597 23:20:11 -- nvmf/common.sh@543 -- # cat 00:17:22.597 23:20:11 -- nvmf/common.sh@545 -- # jq . 00:17:22.597 23:20:11 -- nvmf/common.sh@546 -- # IFS=, 00:17:22.597 23:20:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:22.597 "params": { 00:17:22.597 "name": "Nvme0", 00:17:22.597 "trtype": "tcp", 00:17:22.597 "traddr": "10.0.0.2", 00:17:22.597 "adrfam": "ipv4", 00:17:22.597 "trsvcid": "4420", 00:17:22.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:22.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:22.597 "hdgst": false, 00:17:22.597 "ddgst": false 00:17:22.597 }, 00:17:22.597 "method": "bdev_nvme_attach_controller" 00:17:22.597 }' 00:17:22.597 [2024-04-26 23:20:11.630458] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:22.597 [2024-04-26 23:20:11.630513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908149 ] 00:17:22.597 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.597 [2024-04-26 23:20:11.690364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.597 [2024-04-26 23:20:11.719444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.857 Running I/O for 10 seconds... 
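The JSON fragment printed just above is what gen_nvmf_target_json streams to bdevperf via /dev/fd/63. Written out as a standalone invocation it would look roughly like this; the outer subsystems/bdev wrapper is an assumption based on SPDK's usual JSON config layout (only the params block is verbatim from the trace), and /tmp/bdevperf.json is a hypothetical path:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: write-then-read-back workload, -t 10: seconds
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10

bdevperf attaches to nqn.2016-06.io.spdk:cnode0 over TCP and drives the resulting Nvme0n1 bdev for the ten-second run whose iostat polling follows.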
00:17:23.431 23:20:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:23.431 23:20:12 -- common/autotest_common.sh@850 -- # return 0 00:17:23.431 23:20:12 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:23.431 23:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.431 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.431 23:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.431 23:20:12 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:23.431 23:20:12 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:23.431 23:20:12 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:23.431 23:20:12 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:23.431 23:20:12 -- target/host_management.sh@52 -- # local ret=1 00:17:23.431 23:20:12 -- target/host_management.sh@53 -- # local i 00:17:23.431 23:20:12 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:23.431 23:20:12 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:23.431 23:20:12 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:23.431 23:20:12 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:23.431 23:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.431 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.431 23:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.431 23:20:12 -- target/host_management.sh@55 -- # read_io_count=583 00:17:23.431 23:20:12 -- target/host_management.sh@58 -- # '[' 583 -ge 100 ']' 00:17:23.431 23:20:12 -- target/host_management.sh@59 -- # ret=0 00:17:23.431 23:20:12 -- target/host_management.sh@60 -- # break 00:17:23.431 23:20:12 -- target/host_management.sh@64 -- # return 0 00:17:23.431 23:20:12 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:23.431 23:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.431 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.431 [2024-04-26 23:20:12.475784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1779210 is same with the state(5) to be set 00:17:23.431 [2024-04-26 23:20:12.475831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1779210 is same with the state(5) to be set 00:17:23.431 [2024-04-26 23:20:12.476181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.431 [2024-04-26 23:20:12.476219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.431 [2024-04-26 23:20:12.476237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.431 [2024-04-26 23:20:12.476245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.431 [2024-04-26 23:20:12.476255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.431 [2024-04-26 23:20:12.476263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.431 [... 59 further nvme_io_qpair_print_command/spdk_nvme_print_completion pairs elided for readability: the remaining queued WRITEs (cid:49-63 lba:88192-89984, cid:0-3 lba:90112-90496) and READs (cid:4-43 lba:82432-87424), all len:128, are each printed and completed with the same ABORTED - SQ DELETION (00/08) status ...] 00:17:23.432 [2024-04-26 23:20:12.477373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.432 [2024-04-26 23:20:12.477381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.432 [2024-04-26 23:20:12.477392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.432 [2024-04-26 23:20:12.477400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.432 [2024-04-26 23:20:12.477453] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x277d9c0 was disconnected and freed. reset controller. 00:17:23.433 [2024-04-26 23:20:12.477494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.433 [2024-04-26 23:20:12.477504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.433 [2024-04-26 23:20:12.477514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.433 [2024-04-26 23:20:12.477522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.433 [2024-04-26 23:20:12.477531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.433 [2024-04-26 23:20:12.477539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.433 [2024-04-26 23:20:12.477548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.433 [2024-04-26 23:20:12.477557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:23.433 [2024-04-26 23:20:12.477565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c640 is same with the state(5) to be set 00:17:23.433 [2024-04-26 23:20:12.478760] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:23.433 23:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.433 task offset: 87808 on job bdev=Nvme0n1 fails 00:17:23.433 00:17:23.433 Latency(us) 00:17:23.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.433 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.433 Job: Nvme0n1 ended in about 0.50 seconds with error 00:17:23.433 Verification LBA range: start 0x0 length 0x400 00:17:23.433 Nvme0n1 : 0.50 1294.16 80.88 128.61 0.00 43793.41 2088.96 37137.07 00:17:23.433 =================================================================================================================== 00:17:23.433 Total : 1294.16 80.88 128.61 0.00 43793.41 2088.96 37137.07 00:17:23.433 23:20:12 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:23.433 [2024-04-26 23:20:12.480997] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:23.433 [2024-04-26 23:20:12.481022] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236c640 (9): Bad file descriptor 00:17:23.433 23:20:12 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:17:23.433 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.433 23:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.433 23:20:12 -- target/host_management.sh@87 -- # sleep 1 00:17:23.433 [2024-04-26 23:20:12.530229] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:24.374 23:20:13 -- target/host_management.sh@91 -- # kill -9 3908149 00:17:24.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3908149) - No such process 00:17:24.374 23:20:13 -- target/host_management.sh@91 -- # true 00:17:24.374 23:20:13 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:24.374 23:20:13 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:24.374 23:20:13 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:24.374 23:20:13 -- nvmf/common.sh@521 -- # config=() 00:17:24.374 23:20:13 -- nvmf/common.sh@521 -- # local subsystem config 00:17:24.374 23:20:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:24.374 23:20:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:24.374 { 00:17:24.374 "params": { 00:17:24.374 "name": "Nvme$subsystem", 00:17:24.374 "trtype": "$TEST_TRANSPORT", 00:17:24.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.374 "adrfam": "ipv4", 00:17:24.374 "trsvcid": "$NVMF_PORT", 00:17:24.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.374 "hdgst": ${hdgst:-false}, 00:17:24.374 "ddgst": ${ddgst:-false} 00:17:24.374 }, 00:17:24.374 "method": "bdev_nvme_attach_controller" 00:17:24.374 } 00:17:24.374 EOF 00:17:24.374 )") 00:17:24.374 23:20:13 -- nvmf/common.sh@543 -- # cat 00:17:24.374 23:20:13 -- nvmf/common.sh@545 -- # jq . 00:17:24.374 23:20:13 -- nvmf/common.sh@546 -- # IFS=, 00:17:24.374 23:20:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:24.374 "params": { 00:17:24.374 "name": "Nvme0", 00:17:24.374 "trtype": "tcp", 00:17:24.374 "traddr": "10.0.0.2", 00:17:24.374 "adrfam": "ipv4", 00:17:24.374 "trsvcid": "4420", 00:17:24.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:24.374 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:24.374 "hdgst": false, 00:17:24.374 "ddgst": false 00:17:24.374 }, 00:17:24.374 "method": "bdev_nvme_attach_controller" 00:17:24.374 }' 00:17:24.374 [2024-04-26 23:20:13.545832] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:24.374 [2024-04-26 23:20:13.545888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908501 ] 00:17:24.374 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.374 [2024-04-26 23:20:13.605606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.635 [2024-04-26 23:20:13.632851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.635 Running I/O for 1 seconds... 
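
The waitforio gate exercised in the first bdevperf pass above (host_management.sh@45-64) reduces to polling bdevperf's RPC socket until the bdev reports read traffic. A sketch using the names visible in the trace; rpc.py abbreviates the full scripts/rpc.py path, the retry budget of 10 and the 100-op threshold match the trace, and the sleep pacing is an assumption:

waitforio() {
    local sock=$1 bdev=$2
    local i ret=1
    for (( i = 10; i != 0; i-- )); do
        # Ask bdevperf for per-bdev I/O stats and pull out the read-op count.
        local reads
        reads=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$reads" -ge 100 ] 2>/dev/null; then
            ret=0    # I/O is flowing (the trace saw read_io_count=583 on the first poll)
            break
        fi
        sleep 0.25   # assumption: the real helper's pacing may differ
    done
    return $ret
}
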
00:17:26.017 00:17:26.017 Latency(us) 00:17:26.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.017 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:26.017 Verification LBA range: start 0x0 length 0x400 00:17:26.017 Nvme0n1 : 1.02 1634.03 102.13 0.00 0.00 38404.42 6034.77 33423.36 00:17:26.017 =================================================================================================================== 00:17:26.017 Total : 1634.03 102.13 0.00 0.00 38404.42 6034.77 33423.36 00:17:26.017 23:20:14 -- target/host_management.sh@102 -- # stoptarget 00:17:26.017 23:20:14 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:26.017 23:20:14 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:26.017 23:20:14 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:26.017 23:20:14 -- target/host_management.sh@40 -- # nvmftestfini 00:17:26.017 23:20:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:26.017 23:20:14 -- nvmf/common.sh@117 -- # sync 00:17:26.017 23:20:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.018 23:20:14 -- nvmf/common.sh@120 -- # set +e 00:17:26.018 23:20:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.018 23:20:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.018 rmmod nvme_tcp 00:17:26.018 rmmod nvme_fabrics 00:17:26.018 rmmod nvme_keyring 00:17:26.018 23:20:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.018 23:20:15 -- nvmf/common.sh@124 -- # set -e 00:17:26.018 23:20:15 -- nvmf/common.sh@125 -- # return 0 00:17:26.018 23:20:15 -- nvmf/common.sh@478 -- # '[' -n 3907834 ']' 00:17:26.018 23:20:15 -- nvmf/common.sh@479 -- # killprocess 3907834 00:17:26.018 23:20:15 -- common/autotest_common.sh@936 -- # '[' -z 3907834 ']' 00:17:26.018 23:20:15 -- common/autotest_common.sh@940 -- # kill -0 3907834 00:17:26.018 23:20:15 -- common/autotest_common.sh@941 -- # uname 00:17:26.018 23:20:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:26.018 23:20:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3907834 00:17:26.018 23:20:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:26.018 23:20:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:26.018 23:20:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3907834' 00:17:26.018 killing process with pid 3907834 00:17:26.018 23:20:15 -- common/autotest_common.sh@955 -- # kill 3907834 00:17:26.018 23:20:15 -- common/autotest_common.sh@960 -- # wait 3907834 00:17:26.018 [2024-04-26 23:20:15.188881] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:26.018 23:20:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:26.018 23:20:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:26.018 23:20:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:26.018 23:20:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.018 23:20:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.018 23:20:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.018 23:20:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.018 23:20:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.561 23:20:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
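
The killprocess step traced above follows a plain liveness-check-then-kill pattern; roughly (a sketch, not the exact common/autotest_common.sh body):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                  # still alive?
    local pname
    pname=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 / reactor_1 in the trace
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it; ignore the exit status
}

In nvmftestfini this runs after the nvme_tcp/nvme_fabrics/nvme_keyring modules are unloaded and before the namespace and address cleanup shown at the end of the teardown above.
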
00:17:28.561 00:17:28.561 real 0m6.701s 00:17:28.561 user 0m20.208s 00:17:28.561 sys 0m1.025s 00:17:28.561 23:20:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:28.561 23:20:17 -- common/autotest_common.sh@10 -- # set +x 00:17:28.561 ************************************ 00:17:28.561 END TEST nvmf_host_management 00:17:28.561 ************************************ 00:17:28.561 23:20:17 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:28.561 00:17:28.561 real 0m14.028s 00:17:28.562 user 0m22.208s 00:17:28.562 sys 0m6.252s 00:17:28.562 23:20:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:28.562 23:20:17 -- common/autotest_common.sh@10 -- # set +x 00:17:28.562 ************************************ 00:17:28.562 END TEST nvmf_host_management 00:17:28.562 ************************************ 00:17:28.562 23:20:17 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:28.562 23:20:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:28.562 23:20:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:28.562 23:20:17 -- common/autotest_common.sh@10 -- # set +x 00:17:28.562 ************************************ 00:17:28.562 START TEST nvmf_lvol 00:17:28.562 ************************************ 00:17:28.562 23:20:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:28.562 * Looking for test storage... 00:17:28.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.562 23:20:17 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.562 23:20:17 -- nvmf/common.sh@7 -- # uname -s 00:17:28.562 23:20:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.562 23:20:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.562 23:20:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.562 23:20:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.562 23:20:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.562 23:20:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.562 23:20:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.562 23:20:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.562 23:20:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.562 23:20:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.562 23:20:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.562 23:20:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.562 23:20:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.562 23:20:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.562 23:20:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.562 23:20:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.562 23:20:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.562 23:20:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.562 23:20:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.562 23:20:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.562 23:20:17 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated several times, then the stock system PATH; full value elided ...] 00:17:28.562 23:20:17 -- paths/export.sh@3 -- # PATH=[... as above with /opt/go/1.21.1/bin prepended ...] 00:17:28.562 23:20:17 -- paths/export.sh@4 -- # PATH=[... as above with /opt/protoc/21.7/bin prepended ...] 00:17:28.562 23:20:17 -- paths/export.sh@5 -- # export PATH 00:17:28.562 23:20:17 -- paths/export.sh@6 -- # echo [... expanded PATH (same value as paths/export.sh@4); elided ...] 00:17:28.562 23:20:17 -- nvmf/common.sh@47 -- # : 0 00:17:28.562 23:20:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.562 23:20:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.562 23:20:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.562 23:20:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.562 23:20:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.562 23:20:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.562 23:20:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.562 23:20:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.562 23:20:17 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.562 23:20:17 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.562 23:20:17 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:28.562 23:20:17 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:28.562 23:20:17 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.562 23:20:17 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:28.562 23:20:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:28.562 23:20:17 -- nvmf/common.sh@435 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.562 23:20:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:28.562 23:20:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:28.562 23:20:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:28.562 23:20:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.562 23:20:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.562 23:20:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.562 23:20:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:28.562 23:20:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:28.562 23:20:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.562 23:20:17 -- common/autotest_common.sh@10 -- # set +x 00:17:35.194 23:20:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:35.194 23:20:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:35.194 23:20:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:35.194 23:20:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:35.194 23:20:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:35.194 23:20:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:35.194 23:20:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:35.194 23:20:24 -- nvmf/common.sh@295 -- # net_devs=() 00:17:35.194 23:20:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:35.195 23:20:24 -- nvmf/common.sh@296 -- # e810=() 00:17:35.195 23:20:24 -- nvmf/common.sh@296 -- # local -ga e810 00:17:35.195 23:20:24 -- nvmf/common.sh@297 -- # x722=() 00:17:35.195 23:20:24 -- nvmf/common.sh@297 -- # local -ga x722 00:17:35.195 23:20:24 -- nvmf/common.sh@298 -- # mlx=() 00:17:35.195 23:20:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:35.195 23:20:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.195 23:20:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:35.195 23:20:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:35.195 23:20:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:35.195 23:20:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.195 23:20:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:35.195 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:35.195 23:20:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.195 
23:20:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.195 23:20:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:35.195 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:35.195 23:20:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:35.195 23:20:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.195 23:20:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.195 23:20:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.195 23:20:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.195 23:20:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:35.195 Found net devices under 0000:31:00.0: cvl_0_0 00:17:35.195 23:20:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.195 23:20:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.195 23:20:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.195 23:20:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.195 23:20:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.195 23:20:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:35.195 Found net devices under 0000:31:00.1: cvl_0_1 00:17:35.195 23:20:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.195 23:20:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:35.195 23:20:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:35.195 23:20:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:35.195 23:20:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:35.195 23:20:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.195 23:20:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.195 23:20:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.195 23:20:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:35.195 23:20:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.195 23:20:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.195 23:20:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:35.195 23:20:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.195 23:20:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.195 23:20:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:35.195 23:20:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:35.195 23:20:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.195 23:20:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.456 23:20:24 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:17:35.456 23:20:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.456 23:20:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:35.456 23:20:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.717 23:20:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.717 23:20:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.717 23:20:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.855 ms 00:17:35.717 00:17:35.717 --- 10.0.0.2 ping statistics --- 00:17:35.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.717 rtt min/avg/max/mdev = 0.855/0.855/0.855/0.000 ms 00:17:35.717 23:20:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:17:35.717 00:17:35.717 --- 10.0.0.1 ping statistics --- 00:17:35.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.717 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:17:35.717 23:20:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.717 23:20:24 -- nvmf/common.sh@411 -- # return 0 00:17:35.717 23:20:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:35.717 23:20:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.717 23:20:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:35.717 23:20:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:35.717 23:20:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.717 23:20:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:35.717 23:20:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:35.717 23:20:24 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:35.717 23:20:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:35.717 23:20:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.717 23:20:24 -- common/autotest_common.sh@10 -- # set +x 00:17:35.717 23:20:24 -- nvmf/common.sh@470 -- # nvmfpid=3913042 00:17:35.717 23:20:24 -- nvmf/common.sh@471 -- # waitforlisten 3913042 00:17:35.717 23:20:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:35.717 23:20:24 -- common/autotest_common.sh@817 -- # '[' -z 3913042 ']' 00:17:35.717 23:20:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.717 23:20:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.717 23:20:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.717 23:20:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.717 23:20:24 -- common/autotest_common.sh@10 -- # set +x 00:17:35.717 [2024-04-26 23:20:24.855894] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
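
The nvmf_tcp_init sequence above splits target and initiator across a network namespace so NVMe/TCP traffic crosses the real cvl_0_0/cvl_0_1 port pair instead of loopback; condensed from the commands in the trace (the nvmf_tgt app itself is then launched under ip netns exec cvl_0_0_ns_spdk, as the nvmf/common.sh@469 line shows):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP port
ping -c 1 10.0.0.2                                           # sanity checks, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
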
00:17:35.717 [2024-04-26 23:20:24.855957] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.717 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.717 [2024-04-26 23:20:24.928218] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:35.717 [2024-04-26 23:20:24.965700] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.717 [2024-04-26 23:20:24.965752] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.717 [2024-04-26 23:20:24.965761] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.717 [2024-04-26 23:20:24.965769] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.717 [2024-04-26 23:20:24.965776] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.717 [2024-04-26 23:20:24.965912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.717 [2024-04-26 23:20:24.966217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.717 [2024-04-26 23:20:24.966222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.659 23:20:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.659 23:20:25 -- common/autotest_common.sh@850 -- # return 0 00:17:36.659 23:20:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:36.659 23:20:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:36.659 23:20:25 -- common/autotest_common.sh@10 -- # set +x 00:17:36.659 23:20:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.659 23:20:25 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:36.659 [2024-04-26 23:20:25.814193] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.659 23:20:25 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:36.919 23:20:26 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:36.919 23:20:26 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:37.180 23:20:26 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:37.180 23:20:26 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:37.180 23:20:26 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:37.440 23:20:26 -- target/nvmf_lvol.sh@29 -- # lvs=4cad50c7-9a03-40b2-921a-5e61aa776083 00:17:37.440 23:20:26 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4cad50c7-9a03-40b2-921a-5e61aa776083 lvol 20 00:17:37.701 23:20:26 -- target/nvmf_lvol.sh@32 -- # lvol=3bce8e9e-5970-4c0e-b84d-ec8fc092a837 00:17:37.701 23:20:26 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:37.701 23:20:26 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3bce8e9e-5970-4c0e-b84d-ec8fc092a837 00:17:37.961 23:20:26 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:37.961 [2024-04-26 23:20:27.116652] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.961 23:20:27 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:38.222 23:20:27 -- target/nvmf_lvol.sh@42 -- # perf_pid=3913622 00:17:38.222 23:20:27 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:38.222 23:20:27 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:38.222 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.161 23:20:28 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3bce8e9e-5970-4c0e-b84d-ec8fc092a837 MY_SNAPSHOT 00:17:39.421 23:20:28 -- target/nvmf_lvol.sh@47 -- # snapshot=306793d3-510e-4d03-af31-3fa4b1253bc8 00:17:39.421 23:20:28 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3bce8e9e-5970-4c0e-b84d-ec8fc092a837 30 00:17:39.682 23:20:28 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 306793d3-510e-4d03-af31-3fa4b1253bc8 MY_CLONE 00:17:39.682 23:20:28 -- target/nvmf_lvol.sh@49 -- # clone=c38d696e-82a3-41c5-825c-249f8ec16cac 00:17:39.682 23:20:28 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c38d696e-82a3-41c5-825c-249f8ec16cac 00:17:40.252 23:20:29 -- target/nvmf_lvol.sh@53 -- # wait 3913622 00:17:50.254 Initializing NVMe Controllers 00:17:50.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:50.254 Controller IO queue size 128, less than required. 00:17:50.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:50.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:50.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:50.254 Initialization complete. Launching workers. 
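
For reference, the lvol lifecycle driven above (nvmf_lvol.sh@24-50) condenses to the rpc.py sequence below; $rpc_py is the scripts/rpc.py path set at the top of the test, the UUIDs are captured per role rather than being the literal values from the trace, and the 20/30 sizes are taken to be MiB:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py bdev_malloc_create 64 512                         # Malloc0
$rpc_py bdev_malloc_create 64 512                         # Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)         # lvstore UUID
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)        # 20 MiB logical volume
snapshot=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc_py bdev_lvol_resize "$lvol" 30                       # grow the live volume to 30 MiB
clone=$($rpc_py bdev_lvol_clone "$snapshot" MY_CLONE)
$rpc_py bdev_lvol_inflate "$clone"                        # detach the clone from its snapshot
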
00:17:50.254 ======================================================== 00:17:50.254 Latency(us) 00:17:50.254 Device Information : IOPS MiB/s Average min max 00:17:50.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11135.60 43.50 11498.91 1501.68 55516.05 00:17:50.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11169.20 43.63 11463.40 3667.42 52514.94 00:17:50.254 ======================================================== 00:17:50.254 Total : 22304.80 87.13 11481.13 1501.68 55516.05 00:17:50.254 00:17:50.254 23:20:37 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:50.254 23:20:37 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3bce8e9e-5970-4c0e-b84d-ec8fc092a837 00:17:50.254 23:20:38 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4cad50c7-9a03-40b2-921a-5e61aa776083 00:17:50.254 23:20:38 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:50.254 23:20:38 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:50.254 23:20:38 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:50.254 23:20:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:50.254 23:20:38 -- nvmf/common.sh@117 -- # sync 00:17:50.254 23:20:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:50.254 23:20:38 -- nvmf/common.sh@120 -- # set +e 00:17:50.254 23:20:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.254 23:20:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:50.254 rmmod nvme_tcp 00:17:50.254 rmmod nvme_fabrics 00:17:50.254 rmmod nvme_keyring 00:17:50.254 23:20:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.254 23:20:38 -- nvmf/common.sh@124 -- # set -e 00:17:50.254 23:20:38 -- nvmf/common.sh@125 -- # return 0 00:17:50.254 23:20:38 -- nvmf/common.sh@478 -- # '[' -n 3913042 ']' 00:17:50.254 23:20:38 -- nvmf/common.sh@479 -- # killprocess 3913042 00:17:50.254 23:20:38 -- common/autotest_common.sh@936 -- # '[' -z 3913042 ']' 00:17:50.254 23:20:38 -- common/autotest_common.sh@940 -- # kill -0 3913042 00:17:50.254 23:20:38 -- common/autotest_common.sh@941 -- # uname 00:17:50.254 23:20:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.254 23:20:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3913042 00:17:50.254 23:20:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.254 23:20:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.254 23:20:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3913042' 00:17:50.254 killing process with pid 3913042 00:17:50.254 23:20:38 -- common/autotest_common.sh@955 -- # kill 3913042 00:17:50.254 23:20:38 -- common/autotest_common.sh@960 -- # wait 3913042 00:17:50.254 23:20:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:50.254 23:20:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:50.254 23:20:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:50.254 23:20:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.254 23:20:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.254 23:20:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.254 23:20:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.254 23:20:38 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:51.642 23:20:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:51.642 00:17:51.642 real 0m23.014s 00:17:51.642 user 1m3.498s 00:17:51.642 sys 0m7.573s 00:17:51.642 23:20:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:51.642 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 ************************************ 00:17:51.642 END TEST nvmf_lvol 00:17:51.642 ************************************ 00:17:51.642 23:20:40 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:51.642 23:20:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:51.642 23:20:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:51.642 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 ************************************ 00:17:51.642 START TEST nvmf_lvs_grow 00:17:51.642 ************************************ 00:17:51.642 23:20:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:51.642 * Looking for test storage... 00:17:51.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.642 23:20:40 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.642 23:20:40 -- nvmf/common.sh@7 -- # uname -s 00:17:51.642 23:20:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.642 23:20:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.642 23:20:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.642 23:20:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.642 23:20:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.642 23:20:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.642 23:20:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.642 23:20:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.642 23:20:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.642 23:20:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.642 23:20:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.642 23:20:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.642 23:20:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.642 23:20:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.642 23:20:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.642 23:20:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.642 23:20:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.642 23:20:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.642 23:20:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.642 23:20:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.642 23:20:40 -- paths/export.sh@2 -- # 
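run_test is the autotest wrapper that times a child script and brackets it with the START/END markers seen above; the suite it launches here can be invoked the same way by hand. A sketch, assuming this job's workspace path and that root is required for the NIC and namespace setup:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp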
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.642 23:20:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.642 23:20:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.642 23:20:40 -- paths/export.sh@5 -- # export PATH 00:17:51.642 23:20:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.642 23:20:40 -- nvmf/common.sh@47 -- # : 0 00:17:51.642 23:20:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.642 23:20:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.642 23:20:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.642 23:20:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.642 23:20:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.642 23:20:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.642 23:20:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.642 23:20:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.642 23:20:40 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.642 23:20:40 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:51.642 23:20:40 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:51.642 23:20:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:51.642 23:20:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.642 23:20:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:51.642 23:20:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:51.642 23:20:40 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:17:51.642 23:20:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.642 23:20:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.642 23:20:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.642 23:20:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:51.642 23:20:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:51.642 23:20:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:51.642 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:59.790 23:20:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:59.790 23:20:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:59.790 23:20:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:59.790 23:20:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:59.790 23:20:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:59.790 23:20:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:59.790 23:20:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:59.790 23:20:47 -- nvmf/common.sh@295 -- # net_devs=() 00:17:59.790 23:20:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:59.790 23:20:47 -- nvmf/common.sh@296 -- # e810=() 00:17:59.790 23:20:47 -- nvmf/common.sh@296 -- # local -ga e810 00:17:59.790 23:20:47 -- nvmf/common.sh@297 -- # x722=() 00:17:59.790 23:20:47 -- nvmf/common.sh@297 -- # local -ga x722 00:17:59.790 23:20:47 -- nvmf/common.sh@298 -- # mlx=() 00:17:59.790 23:20:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:59.790 23:20:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.790 23:20:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:59.790 23:20:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:59.790 23:20:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:59.790 23:20:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.790 23:20:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:59.790 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:59.790 23:20:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.790 
23:20:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.790 23:20:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:59.790 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:59.790 23:20:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:59.790 23:20:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:59.790 23:20:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.790 23:20:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.790 23:20:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:59.790 23:20:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.790 23:20:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:59.790 Found net devices under 0000:31:00.0: cvl_0_0 00:17:59.790 23:20:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.790 23:20:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.790 23:20:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.790 23:20:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:59.790 23:20:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.790 23:20:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:59.790 Found net devices under 0000:31:00.1: cvl_0_1 00:17:59.790 23:20:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.790 23:20:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:59.791 23:20:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:59.791 23:20:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:59.791 23:20:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:59.791 23:20:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:59.791 23:20:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.791 23:20:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.791 23:20:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.791 23:20:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:59.791 23:20:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.791 23:20:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.791 23:20:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:59.791 23:20:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.791 23:20:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.791 23:20:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:59.791 23:20:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:59.791 23:20:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.791 23:20:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.791 23:20:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.791 23:20:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.791 23:20:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:59.791 
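The "Found net devices under ..." lines come from a plain sysfs walk: for each whitelisted PCI function, nvmf/common.sh expands /sys/bus/pci/devices/$pci/net/* to learn which kernel netdev the ice driver created for it. The same lookup by hand, with a PCI address from this run:

    pci=0000:31:00.0
    # each entry under .../net/ is a netdev bound to that PCI function (cvl_0_0 here)
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net device under $pci: ${dev##*/}"
    done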
23:20:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:59.791 23:20:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:59.791 23:20:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:59.791 23:20:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:59.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:59.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms
00:17:59.791
00:17:59.791 --- 10.0.0.2 ping statistics ---
00:17:59.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:59.791 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms
00:17:59.791 23:20:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:59.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:59.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms
00:17:59.791
00:17:59.791 --- 10.0.0.1 ping statistics ---
00:17:59.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:59.791 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
00:17:59.791 23:20:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:59.791 23:20:48 -- nvmf/common.sh@411 -- # return 0
00:17:59.791 23:20:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:17:59.791 23:20:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:59.791 23:20:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:17:59.791 23:20:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:17:59.791 23:20:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:59.791 23:20:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:17:59.791 23:20:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:17:59.791 23:20:48 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1
00:17:59.791 23:20:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:17:59.791 23:20:48 -- common/autotest_common.sh@710 -- # xtrace_disable
00:17:59.791 23:20:48 -- common/autotest_common.sh@10 -- # set +x
00:17:59.791 23:20:48 -- nvmf/common.sh@470 -- # nvmfpid=3920030
00:17:59.791 23:20:48 -- nvmf/common.sh@471 -- # waitforlisten 3920030
00:17:59.791 23:20:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:17:59.791 23:20:48 -- common/autotest_common.sh@817 -- # '[' -z 3920030 ']'
00:17:59.791 23:20:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:59.791 23:20:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:59.791 23:20:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:59.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:59.791 23:20:48 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:59.791 23:20:48 -- common/autotest_common.sh@10 -- # set +x
00:17:59.791 [2024-04-26 23:20:48.275236] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
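Condensed, nvmf_tcp_init has just turned the two E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings above prove reachability in both directions before any NVMe traffic flows. Gathered in one place, the sequence it issued:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1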
00:17:59.791 [2024-04-26 23:20:48.275288] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.791 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.791 [2024-04-26 23:20:48.343842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.791 [2024-04-26 23:20:48.374996] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.791 [2024-04-26 23:20:48.375036] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.791 [2024-04-26 23:20:48.375044] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.791 [2024-04-26 23:20:48.375050] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.791 [2024-04-26 23:20:48.375056] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.791 [2024-04-26 23:20:48.375076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.791 23:20:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:59.791 23:20:49 -- common/autotest_common.sh@850 -- # return 0 00:17:59.791 23:20:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:59.791 23:20:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:59.791 23:20:49 -- common/autotest_common.sh@10 -- # set +x 00:18:00.053 23:20:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.053 23:20:49 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:00.053 [2024-04-26 23:20:49.219900] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.053 23:20:49 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:00.053 23:20:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:00.053 23:20:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:00.053 23:20:49 -- common/autotest_common.sh@10 -- # set +x 00:18:00.313 ************************************ 00:18:00.313 START TEST lvs_grow_clean 00:18:00.313 ************************************ 00:18:00.313 23:20:49 -- common/autotest_common.sh@1111 -- # lvs_grow 00:18:00.313 23:20:49 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:00.313 23:20:49 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:00.313 23:20:49 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:00.313 23:20:49 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:00.313 23:20:49 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:00.313 23:20:49 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:00.313 23:20:49 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:00.313 23:20:49 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:00.313 23:20:49 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:00.573 23:20:49 -- 
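With nvmf_tgt up, configuration moves to RPC over /var/tmp/spdk.sock; note that rpc.py needs no ip netns exec wrapper, since the UNIX socket sits on the shared filesystem even though the process lives in a network namespace. The transport creation traced above is then a single call (reading of the flags assumes this revision's rpc.py: -u sets the I/O unit size, -o is the TCP-specific option nvmf/common.sh appends for tcp runs):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192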
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:00.573 23:20:49 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:00.573 23:20:49 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:00.573 23:20:49 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:00.573 23:20:49 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:00.835 23:20:49 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:00.835 23:20:49 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:00.835 23:20:49 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a lvol 150 00:18:00.835 23:20:50 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6667b5d5-d038-41ec-913d-25962f4d3cc7 00:18:00.835 23:20:50 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:00.835 23:20:50 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:01.096 [2024-04-26 23:20:50.193847] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:01.096 [2024-04-26 23:20:50.193900] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:01.096 true 00:18:01.096 23:20:50 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:01.096 23:20:50 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:01.358 23:20:50 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:01.358 23:20:50 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:01.358 23:20:50 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6667b5d5-d038-41ec-913d-25962f4d3cc7 00:18:01.619 23:20:50 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:01.619 [2024-04-26 23:20:50.803704] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.619 23:20:50 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:01.880 23:20:50 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3920552 00:18:01.880 23:20:50 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.880 23:20:50 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3920552 /var/tmp/bdevperf.sock 00:18:01.880 23:20:50 -- common/autotest_common.sh@817 -- # '[' -z 3920552 ']' 00:18:01.880 23:20:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.880 23:20:50 
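The fixture for the clean-grow case is nothing more than a sparse file dressed up as a bdev. A sketch of the steps just traced, with the repo-internal backing file shortened to the illustrative /tmp/aio_file and the lvstore UUID captured the way the harness does:

    truncate -s 200M /tmp/aio_file
    scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150    # 150 MiB lvol
    # grow the backing file, then let the AIO bdev pick up the new size
    truncate -s 400M /tmp/aio_file
    scripts/rpc.py bdev_aio_rescan aio_bdev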
-- common/autotest_common.sh@822 -- # local max_retries=100 00:18:01.880 23:20:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.880 23:20:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:01.880 23:20:50 -- common/autotest_common.sh@10 -- # set +x 00:18:01.880 23:20:50 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:01.880 [2024-04-26 23:20:51.035445] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:01.880 [2024-04-26 23:20:51.035496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920552 ] 00:18:01.880 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.880 [2024-04-26 23:20:51.094202] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.880 [2024-04-26 23:20:51.123192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.141 23:20:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:02.141 23:20:51 -- common/autotest_common.sh@850 -- # return 0 00:18:02.141 23:20:51 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:02.402 Nvme0n1 00:18:02.402 23:20:51 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:02.663 [ 00:18:02.663 { 00:18:02.663 "name": "Nvme0n1", 00:18:02.663 "aliases": [ 00:18:02.663 "6667b5d5-d038-41ec-913d-25962f4d3cc7" 00:18:02.663 ], 00:18:02.663 "product_name": "NVMe disk", 00:18:02.663 "block_size": 4096, 00:18:02.663 "num_blocks": 38912, 00:18:02.663 "uuid": "6667b5d5-d038-41ec-913d-25962f4d3cc7", 00:18:02.663 "assigned_rate_limits": { 00:18:02.663 "rw_ios_per_sec": 0, 00:18:02.663 "rw_mbytes_per_sec": 0, 00:18:02.663 "r_mbytes_per_sec": 0, 00:18:02.663 "w_mbytes_per_sec": 0 00:18:02.663 }, 00:18:02.663 "claimed": false, 00:18:02.663 "zoned": false, 00:18:02.663 "supported_io_types": { 00:18:02.663 "read": true, 00:18:02.663 "write": true, 00:18:02.663 "unmap": true, 00:18:02.663 "write_zeroes": true, 00:18:02.663 "flush": true, 00:18:02.663 "reset": true, 00:18:02.663 "compare": true, 00:18:02.663 "compare_and_write": true, 00:18:02.663 "abort": true, 00:18:02.663 "nvme_admin": true, 00:18:02.663 "nvme_io": true 00:18:02.663 }, 00:18:02.663 "memory_domains": [ 00:18:02.663 { 00:18:02.663 "dma_device_id": "system", 00:18:02.663 "dma_device_type": 1 00:18:02.663 } 00:18:02.663 ], 00:18:02.663 "driver_specific": { 00:18:02.663 "nvme": [ 00:18:02.663 { 00:18:02.663 "trid": { 00:18:02.663 "trtype": "TCP", 00:18:02.663 "adrfam": "IPv4", 00:18:02.663 "traddr": "10.0.0.2", 00:18:02.663 "trsvcid": "4420", 00:18:02.663 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:02.663 }, 00:18:02.663 "ctrlr_data": { 00:18:02.663 "cntlid": 1, 00:18:02.663 "vendor_id": "0x8086", 00:18:02.663 "model_number": "SPDK bdev Controller", 00:18:02.663 "serial_number": "SPDK0", 00:18:02.663 
"firmware_revision": "24.05", 00:18:02.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:02.663 "oacs": { 00:18:02.663 "security": 0, 00:18:02.663 "format": 0, 00:18:02.663 "firmware": 0, 00:18:02.663 "ns_manage": 0 00:18:02.663 }, 00:18:02.663 "multi_ctrlr": true, 00:18:02.663 "ana_reporting": false 00:18:02.663 }, 00:18:02.663 "vs": { 00:18:02.663 "nvme_version": "1.3" 00:18:02.663 }, 00:18:02.663 "ns_data": { 00:18:02.663 "id": 1, 00:18:02.663 "can_share": true 00:18:02.663 } 00:18:02.663 } 00:18:02.663 ], 00:18:02.663 "mp_policy": "active_passive" 00:18:02.663 } 00:18:02.663 } 00:18:02.663 ] 00:18:02.663 23:20:51 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3920758 00:18:02.663 23:20:51 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:02.663 23:20:51 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.663 Running I/O for 10 seconds... 00:18:03.608 Latency(us) 00:18:03.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.608 Nvme0n1 : 1.00 17489.00 68.32 0.00 0.00 0.00 0.00 0.00 00:18:03.608 =================================================================================================================== 00:18:03.608 Total : 17489.00 68.32 0.00 0.00 0.00 0.00 0.00 00:18:03.608 00:18:04.552 23:20:53 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:04.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.813 Nvme0n1 : 2.00 17542.00 68.52 0.00 0.00 0.00 0.00 0.00 00:18:04.813 =================================================================================================================== 00:18:04.813 Total : 17542.00 68.52 0.00 0.00 0.00 0.00 0.00 00:18:04.813 00:18:04.813 true 00:18:04.813 23:20:53 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:04.813 23:20:53 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:05.074 23:20:54 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:05.074 23:20:54 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:05.074 23:20:54 -- target/nvmf_lvs_grow.sh@65 -- # wait 3920758 00:18:05.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.647 Nvme0n1 : 3.00 17597.67 68.74 0.00 0.00 0.00 0.00 0.00 00:18:05.647 =================================================================================================================== 00:18:05.647 Total : 17597.67 68.74 0.00 0.00 0.00 0.00 0.00 00:18:05.647 00:18:06.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.591 Nvme0n1 : 4.00 17630.25 68.87 0.00 0.00 0.00 0.00 0.00 00:18:06.591 =================================================================================================================== 00:18:06.591 Total : 17630.25 68.87 0.00 0.00 0.00 0.00 0.00 00:18:06.591 00:18:07.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.604 Nvme0n1 : 5.00 17652.00 68.95 0.00 0.00 0.00 0.00 0.00 00:18:07.604 =================================================================================================================== 00:18:07.604 Total : 17652.00 68.95 
0.00 0.00 0.00 0.00 0.00 00:18:07.604 00:18:08.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.988 Nvme0n1 : 6.00 17671.67 69.03 0.00 0.00 0.00 0.00 0.00 00:18:08.988 =================================================================================================================== 00:18:08.988 Total : 17671.67 69.03 0.00 0.00 0.00 0.00 0.00 00:18:08.988 00:18:09.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.928 Nvme0n1 : 7.00 17688.00 69.09 0.00 0.00 0.00 0.00 0.00 00:18:09.928 =================================================================================================================== 00:18:09.928 Total : 17688.00 69.09 0.00 0.00 0.00 0.00 0.00 00:18:09.928 00:18:10.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.872 Nvme0n1 : 8.00 17700.88 69.14 0.00 0.00 0.00 0.00 0.00 00:18:10.872 =================================================================================================================== 00:18:10.872 Total : 17700.88 69.14 0.00 0.00 0.00 0.00 0.00 00:18:10.872 00:18:11.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.817 Nvme0n1 : 9.00 17702.89 69.15 0.00 0.00 0.00 0.00 0.00 00:18:11.817 =================================================================================================================== 00:18:11.817 Total : 17702.89 69.15 0.00 0.00 0.00 0.00 0.00 00:18:11.817 00:18:12.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.760 Nvme0n1 : 10.00 17711.20 69.18 0.00 0.00 0.00 0.00 0.00 00:18:12.760 =================================================================================================================== 00:18:12.760 Total : 17711.20 69.18 0.00 0.00 0.00 0.00 0.00 00:18:12.760 00:18:12.760 00:18:12.760 Latency(us) 00:18:12.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.760 Nvme0n1 : 10.01 17713.09 69.19 0.00 0.00 7220.48 2048.00 13325.65 00:18:12.760 =================================================================================================================== 00:18:12.760 Total : 17713.09 69.19 0.00 0.00 7220.48 2048.00 13325.65 00:18:12.760 0 00:18:12.760 23:21:01 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3920552 00:18:12.760 23:21:01 -- common/autotest_common.sh@936 -- # '[' -z 3920552 ']' 00:18:12.760 23:21:01 -- common/autotest_common.sh@940 -- # kill -0 3920552 00:18:12.760 23:21:01 -- common/autotest_common.sh@941 -- # uname 00:18:12.760 23:21:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.760 23:21:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3920552 00:18:12.760 23:21:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:12.760 23:21:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:12.760 23:21:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3920552' 00:18:12.760 killing process with pid 3920552 00:18:12.760 23:21:01 -- common/autotest_common.sh@955 -- # kill 3920552 00:18:12.760 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.760 00:18:12.760 Latency(us) 00:18:12.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.760 =================================================================================================================== 00:18:12.760 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.760 23:21:01 -- common/autotest_common.sh@960 -- # wait 3920552 00:18:13.021 23:21:02 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:13.021 23:21:02 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:13.021 23:21:02 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:13.281 23:21:02 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:13.281 23:21:02 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:13.281 23:21:02 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:13.281 [2024-04-26 23:21:02.498286] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:13.281 23:21:02 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:13.281 23:21:02 -- common/autotest_common.sh@638 -- # local es=0 00:18:13.281 23:21:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:13.281 23:21:02 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.281 23:21:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.281 23:21:02 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.541 23:21:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.541 23:21:02 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.541 23:21:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:13.541 23:21:02 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.541 23:21:02 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:13.541 23:21:02 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:13.541 request: 00:18:13.541 { 00:18:13.541 "uuid": "c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a", 00:18:13.541 "method": "bdev_lvol_get_lvstores", 00:18:13.541 "req_id": 1 00:18:13.541 } 00:18:13.541 Got JSON-RPC error response 00:18:13.541 response: 00:18:13.541 { 00:18:13.542 "code": -19, 00:18:13.542 "message": "No such device" 00:18:13.542 } 00:18:13.542 23:21:02 -- common/autotest_common.sh@641 -- # es=1 00:18:13.542 23:21:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:13.542 23:21:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:13.542 23:21:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:13.542 23:21:02 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:13.802 aio_bdev 00:18:13.802 23:21:02 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
6667b5d5-d038-41ec-913d-25962f4d3cc7 00:18:13.802 23:21:02 -- common/autotest_common.sh@885 -- # local bdev_name=6667b5d5-d038-41ec-913d-25962f4d3cc7 00:18:13.802 23:21:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:13.802 23:21:02 -- common/autotest_common.sh@887 -- # local i 00:18:13.802 23:21:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:13.802 23:21:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:13.802 23:21:02 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:13.802 23:21:02 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6667b5d5-d038-41ec-913d-25962f4d3cc7 -t 2000 00:18:14.062 [ 00:18:14.062 { 00:18:14.063 "name": "6667b5d5-d038-41ec-913d-25962f4d3cc7", 00:18:14.063 "aliases": [ 00:18:14.063 "lvs/lvol" 00:18:14.063 ], 00:18:14.063 "product_name": "Logical Volume", 00:18:14.063 "block_size": 4096, 00:18:14.063 "num_blocks": 38912, 00:18:14.063 "uuid": "6667b5d5-d038-41ec-913d-25962f4d3cc7", 00:18:14.063 "assigned_rate_limits": { 00:18:14.063 "rw_ios_per_sec": 0, 00:18:14.063 "rw_mbytes_per_sec": 0, 00:18:14.063 "r_mbytes_per_sec": 0, 00:18:14.063 "w_mbytes_per_sec": 0 00:18:14.063 }, 00:18:14.063 "claimed": false, 00:18:14.063 "zoned": false, 00:18:14.063 "supported_io_types": { 00:18:14.063 "read": true, 00:18:14.063 "write": true, 00:18:14.063 "unmap": true, 00:18:14.063 "write_zeroes": true, 00:18:14.063 "flush": false, 00:18:14.063 "reset": true, 00:18:14.063 "compare": false, 00:18:14.063 "compare_and_write": false, 00:18:14.063 "abort": false, 00:18:14.063 "nvme_admin": false, 00:18:14.063 "nvme_io": false 00:18:14.063 }, 00:18:14.063 "driver_specific": { 00:18:14.063 "lvol": { 00:18:14.063 "lvol_store_uuid": "c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a", 00:18:14.063 "base_bdev": "aio_bdev", 00:18:14.063 "thin_provision": false, 00:18:14.063 "snapshot": false, 00:18:14.063 "clone": false, 00:18:14.063 "esnap_clone": false 00:18:14.063 } 00:18:14.063 } 00:18:14.063 } 00:18:14.063 ] 00:18:14.063 23:21:03 -- common/autotest_common.sh@893 -- # return 0 00:18:14.063 23:21:03 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:14.063 23:21:03 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:14.063 23:21:03 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:14.063 23:21:03 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:14.063 23:21:03 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:14.322 23:21:03 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:14.322 23:21:03 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6667b5d5-d038-41ec-913d-25962f4d3cc7 00:18:14.582 23:21:03 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a 00:18:14.582 23:21:03 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:14.841 23:21:03 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
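The teardown above doubles as a negative test: deleting the AIO bdev hot-removes the lvstore with it, so the follow-up bdev_lvol_get_lvstores must fail, and the harness's NOT wrapper asserts exactly that (the code -19 / "No such device" JSON-RPC error in the trace). Replayed by hand:

    scripts/rpc.py bdev_aio_delete aio_bdev
    # must fail now; the lvstore was closed when its base bdev went away
    scripts/rpc.py bdev_lvol_get_lvstores -u c3a24b81-bfcb-41a8-9f9b-fab4abf7a75a \
        || echo "lvstore gone as expected (JSON-RPC code -19)"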
00:18:14.841 00:18:14.841 real 0m14.582s 00:18:14.841 user 0m14.246s 00:18:14.841 sys 0m1.224s 00:18:14.842 23:21:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:14.842 23:21:03 -- common/autotest_common.sh@10 -- # set +x 00:18:14.842 ************************************ 00:18:14.842 END TEST lvs_grow_clean 00:18:14.842 ************************************ 00:18:14.842 23:21:04 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:14.842 23:21:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:14.842 23:21:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:14.842 23:21:04 -- common/autotest_common.sh@10 -- # set +x 00:18:15.102 ************************************ 00:18:15.102 START TEST lvs_grow_dirty 00:18:15.102 ************************************ 00:18:15.102 23:21:04 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:15.102 23:21:04 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:15.361 23:21:04 -- target/nvmf_lvs_grow.sh@28 -- # lvs=cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:15.361 23:21:04 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:15.361 23:21:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:15.621 23:21:04 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:15.621 23:21:04 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:15.621 23:21:04 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cf29f80c-7f62-4b08-8bf3-499df188428c lvol 150 00:18:15.621 23:21:04 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e8f690c2-3964-4dcd-a579-6e3677e56c1c 00:18:15.621 23:21:04 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:15.621 23:21:04 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:15.881 [2024-04-26 23:21:04.962419] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:18:15.881 [2024-04-26 23:21:04.962475] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:15.881 true 00:18:15.881 23:21:04 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:15.881 23:21:04 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:16.141 23:21:05 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:16.141 23:21:05 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:16.141 23:21:05 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e8f690c2-3964-4dcd-a579-6e3677e56c1c 00:18:16.401 23:21:05 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:16.401 23:21:05 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:16.661 23:21:05 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3923610 00:18:16.661 23:21:05 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:16.661 23:21:05 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:16.661 23:21:05 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3923610 /var/tmp/bdevperf.sock 00:18:16.661 23:21:05 -- common/autotest_common.sh@817 -- # '[' -z 3923610 ']' 00:18:16.661 23:21:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.661 23:21:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:16.661 23:21:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.661 23:21:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:16.661 23:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:16.661 [2024-04-26 23:21:05.743131] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:18:16.661 [2024-04-26 23:21:05.743185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923610 ] 00:18:16.661 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.661 [2024-04-26 23:21:05.802453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.661 [2024-04-26 23:21:05.831203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.661 23:21:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:16.661 23:21:05 -- common/autotest_common.sh@850 -- # return 0 00:18:16.661 23:21:05 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:17.231 Nvme0n1 00:18:17.231 23:21:06 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:17.231 [ 00:18:17.231 { 00:18:17.231 "name": "Nvme0n1", 00:18:17.231 "aliases": [ 00:18:17.231 "e8f690c2-3964-4dcd-a579-6e3677e56c1c" 00:18:17.231 ], 00:18:17.231 "product_name": "NVMe disk", 00:18:17.231 "block_size": 4096, 00:18:17.231 "num_blocks": 38912, 00:18:17.231 "uuid": "e8f690c2-3964-4dcd-a579-6e3677e56c1c", 00:18:17.231 "assigned_rate_limits": { 00:18:17.231 "rw_ios_per_sec": 0, 00:18:17.231 "rw_mbytes_per_sec": 0, 00:18:17.231 "r_mbytes_per_sec": 0, 00:18:17.231 "w_mbytes_per_sec": 0 00:18:17.231 }, 00:18:17.231 "claimed": false, 00:18:17.231 "zoned": false, 00:18:17.231 "supported_io_types": { 00:18:17.231 "read": true, 00:18:17.231 "write": true, 00:18:17.231 "unmap": true, 00:18:17.231 "write_zeroes": true, 00:18:17.231 "flush": true, 00:18:17.231 "reset": true, 00:18:17.231 "compare": true, 00:18:17.231 "compare_and_write": true, 00:18:17.231 "abort": true, 00:18:17.231 "nvme_admin": true, 00:18:17.231 "nvme_io": true 00:18:17.231 }, 00:18:17.231 "memory_domains": [ 00:18:17.231 { 00:18:17.231 "dma_device_id": "system", 00:18:17.231 "dma_device_type": 1 00:18:17.231 } 00:18:17.231 ], 00:18:17.231 "driver_specific": { 00:18:17.231 "nvme": [ 00:18:17.231 { 00:18:17.231 "trid": { 00:18:17.231 "trtype": "TCP", 00:18:17.231 "adrfam": "IPv4", 00:18:17.231 "traddr": "10.0.0.2", 00:18:17.231 "trsvcid": "4420", 00:18:17.231 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:17.231 }, 00:18:17.231 "ctrlr_data": { 00:18:17.231 "cntlid": 1, 00:18:17.231 "vendor_id": "0x8086", 00:18:17.231 "model_number": "SPDK bdev Controller", 00:18:17.231 "serial_number": "SPDK0", 00:18:17.231 "firmware_revision": "24.05", 00:18:17.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:17.231 "oacs": { 00:18:17.231 "security": 0, 00:18:17.231 "format": 0, 00:18:17.231 "firmware": 0, 00:18:17.231 "ns_manage": 0 00:18:17.231 }, 00:18:17.231 "multi_ctrlr": true, 00:18:17.231 "ana_reporting": false 00:18:17.231 }, 00:18:17.231 "vs": { 00:18:17.231 "nvme_version": "1.3" 00:18:17.231 }, 00:18:17.231 "ns_data": { 00:18:17.231 "id": 1, 00:18:17.231 "can_share": true 00:18:17.231 } 00:18:17.231 } 00:18:17.231 ], 00:18:17.231 "mp_policy": "active_passive" 00:18:17.231 } 00:18:17.231 } 00:18:17.231 ] 00:18:17.231 23:21:06 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.231 
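Same bdevperf choreography as the clean pass: start it idle with -z on a private RPC socket, attach the exported namespace as bdev Nvme0n1 over TCP, then kick off the preloaded job. Repo-relative paths, flags exactly as traced (the harness also waits on the socket via waitforlisten before the attach):

    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests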
23:21:06 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3923669 00:18:17.231 23:21:06 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:17.492 Running I/O for 10 seconds... 00:18:18.432 Latency(us) 00:18:18.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:18.432 Nvme0n1 : 1.00 17277.00 67.49 0.00 0.00 0.00 0.00 0.00 00:18:18.432 =================================================================================================================== 00:18:18.432 Total : 17277.00 67.49 0.00 0.00 0.00 0.00 0.00 00:18:18.432 00:18:19.372 23:21:08 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:19.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.372 Nvme0n1 : 2.00 17403.50 67.98 0.00 0.00 0.00 0.00 0.00 00:18:19.372 =================================================================================================================== 00:18:19.372 Total : 17403.50 67.98 0.00 0.00 0.00 0.00 0.00 00:18:19.372 00:18:19.372 true 00:18:19.372 23:21:08 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:19.372 23:21:08 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:19.632 23:21:08 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:19.632 23:21:08 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:19.632 23:21:08 -- target/nvmf_lvs_grow.sh@65 -- # wait 3923669 00:18:20.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:20.575 Nvme0n1 : 3.00 17446.00 68.15 0.00 0.00 0.00 0.00 0.00 00:18:20.575 =================================================================================================================== 00:18:20.575 Total : 17446.00 68.15 0.00 0.00 0.00 0.00 0.00 00:18:20.575 00:18:21.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.516 Nvme0n1 : 4.00 17495.75 68.34 0.00 0.00 0.00 0.00 0.00 00:18:21.516 =================================================================================================================== 00:18:21.516 Total : 17495.75 68.34 0.00 0.00 0.00 0.00 0.00 00:18:21.516 00:18:22.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:22.460 Nvme0n1 : 5.00 17507.40 68.39 0.00 0.00 0.00 0.00 0.00 00:18:22.460 =================================================================================================================== 00:18:22.460 Total : 17507.40 68.39 0.00 0.00 0.00 0.00 0.00 00:18:22.460 00:18:23.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:23.400 Nvme0n1 : 6.00 17532.50 68.49 0.00 0.00 0.00 0.00 0.00 00:18:23.400 =================================================================================================================== 00:18:23.400 Total : 17532.50 68.49 0.00 0.00 0.00 0.00 0.00 00:18:23.400 00:18:24.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:24.338 Nvme0n1 : 7.00 17553.57 68.57 0.00 0.00 0.00 0.00 0.00 00:18:24.338 =================================================================================================================== 00:18:24.338 Total : 17553.57 68.57 0.00 0.00 0.00 0.00 0.00 00:18:24.338 00:18:25.720 Job: Nvme0n1 (Core Mask 
0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.720 Nvme0n1 : 8.00 17572.88 68.64 0.00 0.00 0.00 0.00 0.00 00:18:25.720 =================================================================================================================== 00:18:25.720 Total : 17572.88 68.64 0.00 0.00 0.00 0.00 0.00 00:18:25.720 00:18:26.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:26.289 Nvme0n1 : 9.00 17581.67 68.68 0.00 0.00 0.00 0.00 0.00 00:18:26.289 =================================================================================================================== 00:18:26.289 Total : 17581.67 68.68 0.00 0.00 0.00 0.00 0.00 00:18:26.289 00:18:27.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:27.669 Nvme0n1 : 10.00 17595.70 68.73 0.00 0.00 0.00 0.00 0.00 00:18:27.669 =================================================================================================================== 00:18:27.669 Total : 17595.70 68.73 0.00 0.00 0.00 0.00 0.00 00:18:27.669 00:18:27.669 00:18:27.669 Latency(us) 00:18:27.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:27.669 Nvme0n1 : 10.01 17593.92 68.73 0.00 0.00 7270.00 4123.31 14199.47 00:18:27.669 =================================================================================================================== 00:18:27.669 Total : 17593.92 68.73 0.00 0.00 7270.00 4123.31 14199.47 00:18:27.669 0 00:18:27.669 23:21:16 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3923610 00:18:27.669 23:21:16 -- common/autotest_common.sh@936 -- # '[' -z 3923610 ']' 00:18:27.669 23:21:16 -- common/autotest_common.sh@940 -- # kill -0 3923610 00:18:27.669 23:21:16 -- common/autotest_common.sh@941 -- # uname 00:18:27.669 23:21:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:27.669 23:21:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3923610 00:18:27.669 23:21:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:27.669 23:21:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:27.669 23:21:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3923610' 00:18:27.669 killing process with pid 3923610 00:18:27.669 23:21:16 -- common/autotest_common.sh@955 -- # kill 3923610 00:18:27.669 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.669 00:18:27.669 Latency(us) 00:18:27.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.669 =================================================================================================================== 00:18:27.669 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.669 23:21:16 -- common/autotest_common.sh@960 -- # wait 3923610 00:18:27.669 23:21:16 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:27.929 23:21:16 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:27.929 23:21:16 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:27.929 23:21:17 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:27.929 23:21:17 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:27.929 23:21:17 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3920030 00:18:27.929 
23:21:17 -- target/nvmf_lvs_grow.sh@74 -- # wait 3920030 00:18:27.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3920030 Killed "${NVMF_APP[@]}" "$@" 00:18:27.929 23:21:17 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:27.929 23:21:17 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:27.929 23:21:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:27.929 23:21:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:27.929 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:18:27.929 23:21:17 -- nvmf/common.sh@470 -- # nvmfpid=3926272 00:18:27.929 23:21:17 -- nvmf/common.sh@471 -- # waitforlisten 3926272 00:18:27.929 23:21:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:27.929 23:21:17 -- common/autotest_common.sh@817 -- # '[' -z 3926272 ']' 00:18:27.929 23:21:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.929 23:21:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:27.929 23:21:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.929 23:21:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:27.929 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:18:28.189 [2024-04-26 23:21:17.206458] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:28.189 [2024-04-26 23:21:17.206516] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.189 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.189 [2024-04-26 23:21:17.273633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.189 [2024-04-26 23:21:17.304498] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.189 [2024-04-26 23:21:17.304537] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.189 [2024-04-26 23:21:17.304544] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.189 [2024-04-26 23:21:17.304551] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.189 [2024-04-26 23:21:17.304556] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
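[editor's note] At this point the test has killed the target (kill -9 3920030) with the grown lvstore still dirty and is restarting nvmf_tgt; the recovery check that follows reduces to a short JSON-RPC sequence. A condensed sketch using the same rpc.py CLI the trace invokes — the paths and UUIDs are the ones logged in this run and are placeholders anywhere else; this is not the script's literal code:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    LVS=cf29f80c-7f62-4b08-8bf3-499df188428c

    # Re-attach the backing AIO file; blobstore recovery replays the
    # metadata left behind by the kill -9 above
    $RPC bdev_aio_create $AIO aio_bdev 4096

    # The grown geometry must survive the unclean shutdown
    $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].free_clusters'        # expect 61
    $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'  # expect 99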
00:18:28.189 [2024-04-26 23:21:17.304574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.759 23:21:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:28.759 23:21:17 -- common/autotest_common.sh@850 -- # return 0 00:18:28.759 23:21:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:28.759 23:21:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:28.759 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:18:28.759 23:21:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.759 23:21:17 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:29.020 [2024-04-26 23:21:18.135307] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:29.020 [2024-04-26 23:21:18.135393] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:29.020 [2024-04-26 23:21:18.135422] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:29.020 23:21:18 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:29.020 23:21:18 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev e8f690c2-3964-4dcd-a579-6e3677e56c1c 00:18:29.020 23:21:18 -- common/autotest_common.sh@885 -- # local bdev_name=e8f690c2-3964-4dcd-a579-6e3677e56c1c 00:18:29.020 23:21:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:29.020 23:21:18 -- common/autotest_common.sh@887 -- # local i 00:18:29.020 23:21:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:29.020 23:21:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:29.020 23:21:18 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:29.280 23:21:18 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e8f690c2-3964-4dcd-a579-6e3677e56c1c -t 2000 00:18:29.280 [ 00:18:29.280 { 00:18:29.280 "name": "e8f690c2-3964-4dcd-a579-6e3677e56c1c", 00:18:29.280 "aliases": [ 00:18:29.280 "lvs/lvol" 00:18:29.280 ], 00:18:29.280 "product_name": "Logical Volume", 00:18:29.280 "block_size": 4096, 00:18:29.280 "num_blocks": 38912, 00:18:29.280 "uuid": "e8f690c2-3964-4dcd-a579-6e3677e56c1c", 00:18:29.280 "assigned_rate_limits": { 00:18:29.280 "rw_ios_per_sec": 0, 00:18:29.280 "rw_mbytes_per_sec": 0, 00:18:29.280 "r_mbytes_per_sec": 0, 00:18:29.280 "w_mbytes_per_sec": 0 00:18:29.280 }, 00:18:29.280 "claimed": false, 00:18:29.280 "zoned": false, 00:18:29.280 "supported_io_types": { 00:18:29.280 "read": true, 00:18:29.280 "write": true, 00:18:29.280 "unmap": true, 00:18:29.280 "write_zeroes": true, 00:18:29.280 "flush": false, 00:18:29.280 "reset": true, 00:18:29.280 "compare": false, 00:18:29.280 "compare_and_write": false, 00:18:29.280 "abort": false, 00:18:29.280 "nvme_admin": false, 00:18:29.280 "nvme_io": false 00:18:29.280 }, 00:18:29.280 "driver_specific": { 00:18:29.280 "lvol": { 00:18:29.280 "lvol_store_uuid": "cf29f80c-7f62-4b08-8bf3-499df188428c", 00:18:29.280 "base_bdev": "aio_bdev", 00:18:29.280 "thin_provision": false, 00:18:29.280 "snapshot": false, 00:18:29.280 "clone": false, 00:18:29.280 "esnap_clone": false 00:18:29.280 } 00:18:29.280 } 00:18:29.280 } 00:18:29.280 ] 00:18:29.280 23:21:18 -- common/autotest_common.sh@893 -- # return 0 00:18:29.280 23:21:18 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:29.280 23:21:18 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:29.540 23:21:18 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:29.540 23:21:18 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:29.540 23:21:18 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:29.540 23:21:18 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:29.540 23:21:18 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:29.800 [2024-04-26 23:21:18.899213] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:29.800 23:21:18 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:29.801 23:21:18 -- common/autotest_common.sh@638 -- # local es=0 00:18:29.801 23:21:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:29.801 23:21:18 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:29.801 23:21:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:29.801 23:21:18 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:29.801 23:21:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:29.801 23:21:18 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:29.801 23:21:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:29.801 23:21:18 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:29.801 23:21:18 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:29.801 23:21:18 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:30.061 request: 00:18:30.061 { 00:18:30.061 "uuid": "cf29f80c-7f62-4b08-8bf3-499df188428c", 00:18:30.061 "method": "bdev_lvol_get_lvstores", 00:18:30.061 "req_id": 1 00:18:30.061 } 00:18:30.061 Got JSON-RPC error response 00:18:30.061 response: 00:18:30.061 { 00:18:30.061 "code": -19, 00:18:30.061 "message": "No such device" 00:18:30.061 } 00:18:30.061 23:21:19 -- common/autotest_common.sh@641 -- # es=1 00:18:30.061 23:21:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:30.061 23:21:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:30.061 23:21:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:30.061 23:21:19 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:30.061 aio_bdev 00:18:30.061 23:21:19 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev e8f690c2-3964-4dcd-a579-6e3677e56c1c 00:18:30.061 23:21:19 -- 
common/autotest_common.sh@885 -- # local bdev_name=e8f690c2-3964-4dcd-a579-6e3677e56c1c 00:18:30.061 23:21:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:30.061 23:21:19 -- common/autotest_common.sh@887 -- # local i 00:18:30.061 23:21:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:30.061 23:21:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:30.061 23:21:19 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:30.321 23:21:19 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e8f690c2-3964-4dcd-a579-6e3677e56c1c -t 2000 00:18:30.321 [ 00:18:30.321 { 00:18:30.321 "name": "e8f690c2-3964-4dcd-a579-6e3677e56c1c", 00:18:30.321 "aliases": [ 00:18:30.321 "lvs/lvol" 00:18:30.321 ], 00:18:30.321 "product_name": "Logical Volume", 00:18:30.321 "block_size": 4096, 00:18:30.321 "num_blocks": 38912, 00:18:30.321 "uuid": "e8f690c2-3964-4dcd-a579-6e3677e56c1c", 00:18:30.321 "assigned_rate_limits": { 00:18:30.321 "rw_ios_per_sec": 0, 00:18:30.321 "rw_mbytes_per_sec": 0, 00:18:30.321 "r_mbytes_per_sec": 0, 00:18:30.321 "w_mbytes_per_sec": 0 00:18:30.321 }, 00:18:30.321 "claimed": false, 00:18:30.321 "zoned": false, 00:18:30.321 "supported_io_types": { 00:18:30.321 "read": true, 00:18:30.321 "write": true, 00:18:30.321 "unmap": true, 00:18:30.321 "write_zeroes": true, 00:18:30.321 "flush": false, 00:18:30.321 "reset": true, 00:18:30.321 "compare": false, 00:18:30.321 "compare_and_write": false, 00:18:30.321 "abort": false, 00:18:30.321 "nvme_admin": false, 00:18:30.321 "nvme_io": false 00:18:30.321 }, 00:18:30.321 "driver_specific": { 00:18:30.321 "lvol": { 00:18:30.321 "lvol_store_uuid": "cf29f80c-7f62-4b08-8bf3-499df188428c", 00:18:30.321 "base_bdev": "aio_bdev", 00:18:30.321 "thin_provision": false, 00:18:30.321 "snapshot": false, 00:18:30.321 "clone": false, 00:18:30.321 "esnap_clone": false 00:18:30.321 } 00:18:30.321 } 00:18:30.321 } 00:18:30.321 ] 00:18:30.321 23:21:19 -- common/autotest_common.sh@893 -- # return 0 00:18:30.321 23:21:19 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:30.321 23:21:19 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:30.581 23:21:19 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:30.581 23:21:19 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:30.581 23:21:19 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:30.841 23:21:19 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:30.841 23:21:19 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e8f690c2-3964-4dcd-a579-6e3677e56c1c 00:18:30.841 23:21:19 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf29f80c-7f62-4b08-8bf3-499df188428c 00:18:31.101 23:21:20 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:31.101 23:21:20 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:31.101 00:18:31.101 real 0m16.183s 00:18:31.101 user 
0m42.258s 00:18:31.101 sys 0m2.846s 00:18:31.101 23:21:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:31.101 23:21:20 -- common/autotest_common.sh@10 -- # set +x 00:18:31.101 ************************************ 00:18:31.101 END TEST lvs_grow_dirty 00:18:31.101 ************************************ 00:18:31.361 23:21:20 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:31.361 23:21:20 -- common/autotest_common.sh@794 -- # type=--id 00:18:31.361 23:21:20 -- common/autotest_common.sh@795 -- # id=0 00:18:31.361 23:21:20 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:31.361 23:21:20 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:31.361 23:21:20 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:31.361 23:21:20 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:31.361 23:21:20 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:31.361 23:21:20 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:31.361 nvmf_trace.0 00:18:31.361 23:21:20 -- common/autotest_common.sh@809 -- # return 0 00:18:31.361 23:21:20 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:31.361 23:21:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:31.361 23:21:20 -- nvmf/common.sh@117 -- # sync 00:18:31.361 23:21:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:31.362 23:21:20 -- nvmf/common.sh@120 -- # set +e 00:18:31.362 23:21:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.362 23:21:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:31.362 rmmod nvme_tcp 00:18:31.362 rmmod nvme_fabrics 00:18:31.362 rmmod nvme_keyring 00:18:31.362 23:21:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.362 23:21:20 -- nvmf/common.sh@124 -- # set -e 00:18:31.362 23:21:20 -- nvmf/common.sh@125 -- # return 0 00:18:31.362 23:21:20 -- nvmf/common.sh@478 -- # '[' -n 3926272 ']' 00:18:31.362 23:21:20 -- nvmf/common.sh@479 -- # killprocess 3926272 00:18:31.362 23:21:20 -- common/autotest_common.sh@936 -- # '[' -z 3926272 ']' 00:18:31.362 23:21:20 -- common/autotest_common.sh@940 -- # kill -0 3926272 00:18:31.362 23:21:20 -- common/autotest_common.sh@941 -- # uname 00:18:31.362 23:21:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.362 23:21:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3926272 00:18:31.362 23:21:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:31.362 23:21:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:31.362 23:21:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3926272' 00:18:31.362 killing process with pid 3926272 00:18:31.362 23:21:20 -- common/autotest_common.sh@955 -- # kill 3926272 00:18:31.362 23:21:20 -- common/autotest_common.sh@960 -- # wait 3926272 00:18:31.622 23:21:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:31.622 23:21:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:31.622 23:21:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:31.622 23:21:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.622 23:21:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.622 23:21:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.622 23:21:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.622 23:21:20 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:33.534 23:21:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:33.534 00:18:33.534 real 0m42.006s 00:18:33.534 user 1m2.428s 00:18:33.534 sys 0m10.091s 00:18:33.534 23:21:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:33.534 23:21:22 -- common/autotest_common.sh@10 -- # set +x 00:18:33.534 ************************************ 00:18:33.534 END TEST nvmf_lvs_grow 00:18:33.534 ************************************ 00:18:33.534 23:21:22 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:33.534 23:21:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:33.534 23:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:33.534 23:21:22 -- common/autotest_common.sh@10 -- # set +x 00:18:33.797 ************************************ 00:18:33.797 START TEST nvmf_bdev_io_wait 00:18:33.797 ************************************ 00:18:33.797 23:21:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:33.797 * Looking for test storage... 00:18:33.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.797 23:21:23 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.797 23:21:23 -- nvmf/common.sh@7 -- # uname -s 00:18:33.797 23:21:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.797 23:21:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.797 23:21:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.797 23:21:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.797 23:21:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.797 23:21:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.797 23:21:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.797 23:21:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.797 23:21:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.797 23:21:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.059 23:21:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.059 23:21:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.059 23:21:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.059 23:21:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.059 23:21:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.059 23:21:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.059 23:21:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.059 23:21:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.059 23:21:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.059 23:21:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.059 23:21:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.059 23:21:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.059 23:21:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.059 23:21:23 -- paths/export.sh@5 -- # export PATH 00:18:34.059 23:21:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.059 23:21:23 -- nvmf/common.sh@47 -- # : 0 00:18:34.059 23:21:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.059 23:21:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.059 23:21:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.059 23:21:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.059 23:21:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.059 23:21:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.059 23:21:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.059 23:21:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.059 23:21:23 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:34.059 23:21:23 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:34.059 23:21:23 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:34.059 23:21:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:34.059 23:21:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.059 23:21:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:34.059 23:21:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:34.059 23:21:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:34.059 23:21:23 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.059 23:21:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.059 23:21:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.059 23:21:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:34.059 23:21:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:34.059 23:21:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:34.059 23:21:23 -- common/autotest_common.sh@10 -- # set +x 00:18:42.197 23:21:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:42.197 23:21:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:42.197 23:21:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:42.197 23:21:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:42.197 23:21:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:42.197 23:21:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:42.197 23:21:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:42.197 23:21:30 -- nvmf/common.sh@295 -- # net_devs=() 00:18:42.197 23:21:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:42.197 23:21:30 -- nvmf/common.sh@296 -- # e810=() 00:18:42.197 23:21:30 -- nvmf/common.sh@296 -- # local -ga e810 00:18:42.197 23:21:30 -- nvmf/common.sh@297 -- # x722=() 00:18:42.197 23:21:30 -- nvmf/common.sh@297 -- # local -ga x722 00:18:42.197 23:21:30 -- nvmf/common.sh@298 -- # mlx=() 00:18:42.197 23:21:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:42.197 23:21:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.197 23:21:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:42.197 23:21:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:42.197 23:21:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:42.197 23:21:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.197 23:21:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:42.197 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:42.197 23:21:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:18:42.197 23:21:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:42.197 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:42.197 23:21:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:42.197 23:21:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.197 23:21:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.197 23:21:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:42.197 23:21:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.197 23:21:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:42.197 Found net devices under 0000:31:00.0: cvl_0_0 00:18:42.197 23:21:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.197 23:21:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.197 23:21:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.197 23:21:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:42.197 23:21:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.197 23:21:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:42.197 Found net devices under 0000:31:00.1: cvl_0_1 00:18:42.197 23:21:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.197 23:21:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:42.197 23:21:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:42.197 23:21:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:42.197 23:21:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:42.197 23:21:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.197 23:21:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.197 23:21:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.197 23:21:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:42.197 23:21:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.197 23:21:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.197 23:21:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:42.197 23:21:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.197 23:21:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.197 23:21:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:42.197 23:21:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:42.197 23:21:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.197 23:21:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.197 23:21:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.197 23:21:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.197 23:21:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:42.197 23:21:30 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.197 23:21:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.197 23:21:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.197 23:21:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:42.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.736 ms 00:18:42.197 00:18:42.197 --- 10.0.0.2 ping statistics --- 00:18:42.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.197 rtt min/avg/max/mdev = 0.736/0.736/0.736/0.000 ms 00:18:42.197 23:21:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:18:42.198 00:18:42.198 --- 10.0.0.1 ping statistics --- 00:18:42.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.198 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:18:42.198 23:21:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.198 23:21:30 -- nvmf/common.sh@411 -- # return 0 00:18:42.198 23:21:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:42.198 23:21:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.198 23:21:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:42.198 23:21:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:42.198 23:21:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.198 23:21:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:42.198 23:21:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:42.198 23:21:30 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:42.198 23:21:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:42.198 23:21:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:42.198 23:21:30 -- common/autotest_common.sh@10 -- # set +x 00:18:42.198 23:21:30 -- nvmf/common.sh@470 -- # nvmfpid=3931229 00:18:42.198 23:21:30 -- nvmf/common.sh@471 -- # waitforlisten 3931229 00:18:42.198 23:21:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:42.198 23:21:30 -- common/autotest_common.sh@817 -- # '[' -z 3931229 ']' 00:18:42.198 23:21:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.198 23:21:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:42.198 23:21:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.198 23:21:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:42.198 23:21:30 -- common/autotest_common.sh@10 -- # set +x 00:18:42.198 [2024-04-26 23:21:30.550008] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
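[editor's note] The nvmf_tcp_init sequence traced above reduces to a small two-namespace topology: the target-side port moves into cvl_0_0_ns_spdk with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, and the ping pair proves the path before the target starts. Condensed from the commands actually traced (same cvl_0_* device names assumed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address
    ip netns exec cvl_0_0_ns_spdk \
        ip addr add 10.0.0.2/24 dev cvl_0_0         # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # 0.736 ms in this run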
00:18:42.198 [2024-04-26 23:21:30.550078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.198 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.198 [2024-04-26 23:21:30.622510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.198 [2024-04-26 23:21:30.661570] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.198 [2024-04-26 23:21:30.661617] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.198 [2024-04-26 23:21:30.661624] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.198 [2024-04-26 23:21:30.661631] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.198 [2024-04-26 23:21:30.661637] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.198 [2024-04-26 23:21:30.661757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.198 [2024-04-26 23:21:30.661872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.198 [2024-04-26 23:21:30.661979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.198 [2024-04-26 23:21:30.662157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.198 23:21:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:42.198 23:21:31 -- common/autotest_common.sh@850 -- # return 0 00:18:42.198 23:21:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:42.198 23:21:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:42.198 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:18:42.198 23:21:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.198 23:21:31 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:42.198 23:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.198 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:18:42.198 23:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.198 23:21:31 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:42.198 23:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.198 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:18:42.198 23:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.198 23:21:31 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.198 23:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.198 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:18:42.198 [2024-04-26 23:21:31.438350] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.198 23:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.198 23:21:31 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:42.198 23:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.198 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:18:42.460 Malloc0 00:18:42.460 23:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:42.460 23:21:31 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.460 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:18:42.460 23:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:42.460 23:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.460 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:18:42.460 23:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.460 23:21:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.460 23:21:31 -- common/autotest_common.sh@10 -- # set +x 00:18:42.460 [2024-04-26 23:21:31.504173] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.460 23:21:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3931575 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@30 -- # READ_PID=3931578 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:42.460 23:21:31 -- nvmf/common.sh@521 -- # config=() 00:18:42.460 23:21:31 -- nvmf/common.sh@521 -- # local subsystem config 00:18:42.460 23:21:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:42.460 23:21:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:42.460 { 00:18:42.460 "params": { 00:18:42.460 "name": "Nvme$subsystem", 00:18:42.460 "trtype": "$TEST_TRANSPORT", 00:18:42.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:42.460 "adrfam": "ipv4", 00:18:42.460 "trsvcid": "$NVMF_PORT", 00:18:42.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:42.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:42.460 "hdgst": ${hdgst:-false}, 00:18:42.460 "ddgst": ${ddgst:-false} 00:18:42.460 }, 00:18:42.460 "method": "bdev_nvme_attach_controller" 00:18:42.460 } 00:18:42.460 EOF 00:18:42.460 )") 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3931580 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:42.460 23:21:31 -- nvmf/common.sh@521 -- # config=() 00:18:42.460 23:21:31 -- nvmf/common.sh@521 -- # local subsystem config 00:18:42.460 23:21:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3931583 00:18:42.460 23:21:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:42.460 { 00:18:42.460 "params": { 00:18:42.460 "name": "Nvme$subsystem", 00:18:42.460 "trtype": "$TEST_TRANSPORT", 00:18:42.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:42.460 "adrfam": "ipv4", 00:18:42.460 "trsvcid": "$NVMF_PORT", 00:18:42.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:42.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:42.460 "hdgst": ${hdgst:-false}, 00:18:42.460 "ddgst": ${ddgst:-false} 00:18:42.460 }, 00:18:42.460 "method": "bdev_nvme_attach_controller" 00:18:42.460 } 00:18:42.460 EOF 00:18:42.460 )") 00:18:42.460 
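[editor's note] The interleaved launches above and below follow one pattern: four bdevperf instances in parallel — write (mask 0x10, pid 3931575), read (0x20, 3931578), flush (0x40, 3931580), unmap (0x80, 3931583) — each fed a generated config over /dev/fd/63. A condensed sketch of that shape; the script itself starts each instance individually and records WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID rather than looping, but the flags match the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    i=0
    for w in write:0x10 read:0x20 flush:0x40 unmap:0x80; do
        i=$((i + 1))
        # process substitution is what shows up as --json /dev/fd/63
        $SPDK/build/examples/bdevperf -m ${w#*:} -i $i --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w ${w%:*} -t 1 -s 256 &
    done
    wait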
23:21:31 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@35 -- # sync 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:42.460 23:21:31 -- nvmf/common.sh@543 -- # cat 00:18:42.460 23:21:31 -- nvmf/common.sh@521 -- # config=() 00:18:42.460 23:21:31 -- nvmf/common.sh@521 -- # local subsystem config 00:18:42.460 23:21:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:42.460 23:21:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:42.460 { 00:18:42.460 "params": { 00:18:42.460 "name": "Nvme$subsystem", 00:18:42.460 "trtype": "$TEST_TRANSPORT", 00:18:42.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:42.460 "adrfam": "ipv4", 00:18:42.460 "trsvcid": "$NVMF_PORT", 00:18:42.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:42.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:42.460 "hdgst": ${hdgst:-false}, 00:18:42.460 "ddgst": ${ddgst:-false} 00:18:42.460 }, 00:18:42.460 "method": "bdev_nvme_attach_controller" 00:18:42.460 } 00:18:42.460 EOF 00:18:42.460 )") 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:42.460 23:21:31 -- nvmf/common.sh@521 -- # config=() 00:18:42.460 23:21:31 -- nvmf/common.sh@521 -- # local subsystem config 00:18:42.460 23:21:31 -- nvmf/common.sh@543 -- # cat 00:18:42.460 23:21:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:42.460 23:21:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:42.460 { 00:18:42.460 "params": { 00:18:42.460 "name": "Nvme$subsystem", 00:18:42.460 "trtype": "$TEST_TRANSPORT", 00:18:42.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:42.460 "adrfam": "ipv4", 00:18:42.460 "trsvcid": "$NVMF_PORT", 00:18:42.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:42.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:42.460 "hdgst": ${hdgst:-false}, 00:18:42.460 "ddgst": ${ddgst:-false} 00:18:42.460 }, 00:18:42.460 "method": "bdev_nvme_attach_controller" 00:18:42.460 } 00:18:42.460 EOF 00:18:42.460 )") 00:18:42.460 23:21:31 -- nvmf/common.sh@543 -- # cat 00:18:42.460 23:21:31 -- target/bdev_io_wait.sh@37 -- # wait 3931575 00:18:42.460 23:21:31 -- nvmf/common.sh@543 -- # cat 00:18:42.460 23:21:31 -- nvmf/common.sh@545 -- # jq . 00:18:42.460 23:21:31 -- nvmf/common.sh@545 -- # jq . 00:18:42.460 23:21:31 -- nvmf/common.sh@545 -- # jq . 00:18:42.460 23:21:31 -- nvmf/common.sh@546 -- # IFS=, 00:18:42.460 23:21:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:42.460 "params": { 00:18:42.460 "name": "Nvme1", 00:18:42.460 "trtype": "tcp", 00:18:42.460 "traddr": "10.0.0.2", 00:18:42.460 "adrfam": "ipv4", 00:18:42.460 "trsvcid": "4420", 00:18:42.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.460 "hdgst": false, 00:18:42.460 "ddgst": false 00:18:42.460 }, 00:18:42.460 "method": "bdev_nvme_attach_controller" 00:18:42.460 }' 00:18:42.460 23:21:31 -- nvmf/common.sh@545 -- # jq . 
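[editor's note] Once gen_nvmf_target_json's $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP / $NVMF_PORT placeholders expand, every instance receives the same attach-controller entry — it is printed four times in this stretch of the trace. Shown once here, reflowed for readability:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }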
00:18:42.460 23:21:31 -- nvmf/common.sh@546 -- # IFS=, 00:18:42.460 23:21:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:42.460 "params": { 00:18:42.460 "name": "Nvme1", 00:18:42.460 "trtype": "tcp", 00:18:42.460 "traddr": "10.0.0.2", 00:18:42.460 "adrfam": "ipv4", 00:18:42.460 "trsvcid": "4420", 00:18:42.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.460 "hdgst": false, 00:18:42.460 "ddgst": false 00:18:42.460 }, 00:18:42.460 "method": "bdev_nvme_attach_controller" 00:18:42.460 }' 00:18:42.460 23:21:31 -- nvmf/common.sh@546 -- # IFS=, 00:18:42.460 23:21:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:42.460 "params": { 00:18:42.460 "name": "Nvme1", 00:18:42.460 "trtype": "tcp", 00:18:42.460 "traddr": "10.0.0.2", 00:18:42.460 "adrfam": "ipv4", 00:18:42.460 "trsvcid": "4420", 00:18:42.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.460 "hdgst": false, 00:18:42.460 "ddgst": false 00:18:42.460 }, 00:18:42.460 "method": "bdev_nvme_attach_controller" 00:18:42.460 }' 00:18:42.460 23:21:31 -- nvmf/common.sh@546 -- # IFS=, 00:18:42.460 23:21:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:42.460 "params": { 00:18:42.460 "name": "Nvme1", 00:18:42.460 "trtype": "tcp", 00:18:42.460 "traddr": "10.0.0.2", 00:18:42.460 "adrfam": "ipv4", 00:18:42.460 "trsvcid": "4420", 00:18:42.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.460 "hdgst": false, 00:18:42.461 "ddgst": false 00:18:42.461 }, 00:18:42.461 "method": "bdev_nvme_attach_controller" 00:18:42.461 }' 00:18:42.461 [2024-04-26 23:21:31.554452] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:42.461 [2024-04-26 23:21:31.554503] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:42.461 [2024-04-26 23:21:31.558214] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:42.461 [2024-04-26 23:21:31.558261] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:42.461 [2024-04-26 23:21:31.558489] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:42.461 [2024-04-26 23:21:31.558531] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:42.461 [2024-04-26 23:21:31.559451] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:18:42.461 [2024-04-26 23:21:31.559497] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:42.461 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.461 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.461 [2024-04-26 23:21:31.697859] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.461 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.722 [2024-04-26 23:21:31.714210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:42.722 [2024-04-26 23:21:31.755876] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.722 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.722 [2024-04-26 23:21:31.773578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:42.722 [2024-04-26 23:21:31.802158] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.722 [2024-04-26 23:21:31.818360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:42.722 [2024-04-26 23:21:31.850889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.722 [2024-04-26 23:21:31.867389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:42.722 Running I/O for 1 seconds... 00:18:42.983 Running I/O for 1 seconds... 00:18:42.983 Running I/O for 1 seconds... 00:18:42.983 Running I/O for 1 seconds... 00:18:44.034 00:18:44.034 Latency(us) 00:18:44.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.034 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:44.034 Nvme1n1 : 1.00 20207.46 78.94 0.00 0.00 6320.32 3426.99 16602.45 00:18:44.034 =================================================================================================================== 00:18:44.034 Total : 20207.46 78.94 0.00 0.00 6320.32 3426.99 16602.45 00:18:44.034 00:18:44.034 Latency(us) 00:18:44.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.034 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:44.034 Nvme1n1 : 1.01 10998.13 42.96 0.00 0.00 11600.28 5679.79 21845.33 00:18:44.034 =================================================================================================================== 00:18:44.034 Total : 10998.13 42.96 0.00 0.00 11600.28 5679.79 21845.33 00:18:44.034 00:18:44.034 Latency(us) 00:18:44.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.034 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:44.034 Nvme1n1 : 1.00 188078.92 734.68 0.00 0.00 677.81 266.24 761.17 00:18:44.034 =================================================================================================================== 00:18:44.034 Total : 188078.92 734.68 0.00 0.00 677.81 266.24 761.17 00:18:44.034 00:18:44.034 Latency(us) 00:18:44.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.034 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:44.034 Nvme1n1 : 1.01 11416.05 44.59 0.00 0.00 11174.86 6171.31 22719.15 00:18:44.034 =================================================================================================================== 00:18:44.034 Total : 11416.05 44.59 0.00 0.00 11174.86 6171.31 22719.15 00:18:44.034 23:21:33 -- target/bdev_io_wait.sh@38 -- # wait 3931578 00:18:44.034 
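[editor's note] A quick consistency check on the four result tables above: bdevperf's MiB/s column is just IOPS scaled by the 4096-byte I/O size. The flush job's ~188k IOPS dwarfs the read/write numbers, plausibly because a flush against the Malloc0 backing bdev completes without touching media — an interpretation, not something the log states:

    # MiB/s = IOPS * io_size / 2^20, with io_size = 4096 for every job here
    echo '188078.92 * 4096 / 1048576' | bc -l   # 734.68  -> matches the flush row
    echo '20207.46 * 4096 / 1048576'  | bc -l   # ~78.94  -> matches the unmap row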
23:21:33 -- target/bdev_io_wait.sh@39 -- # wait 3931580 00:18:44.034 23:21:33 -- target/bdev_io_wait.sh@40 -- # wait 3931583 00:18:44.034 23:21:33 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:44.034 23:21:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:44.034 23:21:33 -- common/autotest_common.sh@10 -- # set +x 00:18:44.034 23:21:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:44.034 23:21:33 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:44.034 23:21:33 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:44.034 23:21:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:44.034 23:21:33 -- nvmf/common.sh@117 -- # sync 00:18:44.034 23:21:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.034 23:21:33 -- nvmf/common.sh@120 -- # set +e 00:18:44.034 23:21:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.034 23:21:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.034 rmmod nvme_tcp 00:18:44.310 rmmod nvme_fabrics 00:18:44.310 rmmod nvme_keyring 00:18:44.310 23:21:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.310 23:21:33 -- nvmf/common.sh@124 -- # set -e 00:18:44.310 23:21:33 -- nvmf/common.sh@125 -- # return 0 00:18:44.310 23:21:33 -- nvmf/common.sh@478 -- # '[' -n 3931229 ']' 00:18:44.310 23:21:33 -- nvmf/common.sh@479 -- # killprocess 3931229 00:18:44.310 23:21:33 -- common/autotest_common.sh@936 -- # '[' -z 3931229 ']' 00:18:44.310 23:21:33 -- common/autotest_common.sh@940 -- # kill -0 3931229 00:18:44.310 23:21:33 -- common/autotest_common.sh@941 -- # uname 00:18:44.310 23:21:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:44.310 23:21:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3931229 00:18:44.310 23:21:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:44.310 23:21:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:44.310 23:21:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3931229' 00:18:44.310 killing process with pid 3931229 00:18:44.310 23:21:33 -- common/autotest_common.sh@955 -- # kill 3931229 00:18:44.310 23:21:33 -- common/autotest_common.sh@960 -- # wait 3931229 00:18:44.310 23:21:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:44.310 23:21:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:44.310 23:21:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:44.310 23:21:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.310 23:21:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.310 23:21:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.310 23:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.310 23:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.849 23:21:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.849 00:18:46.849 real 0m12.636s 00:18:46.849 user 0m18.508s 00:18:46.849 sys 0m6.852s 00:18:46.849 23:21:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:46.849 23:21:35 -- common/autotest_common.sh@10 -- # set +x 00:18:46.849 ************************************ 00:18:46.849 END TEST nvmf_bdev_io_wait 00:18:46.849 ************************************ 00:18:46.849 23:21:35 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:46.849 23:21:35 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:18:46.849 23:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.849 23:21:35 -- common/autotest_common.sh@10 -- # set +x 00:18:46.849 ************************************ 00:18:46.849 START TEST nvmf_queue_depth 00:18:46.849 ************************************ 00:18:46.849 23:21:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:46.849 * Looking for test storage... 00:18:46.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.849 23:21:35 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.849 23:21:35 -- nvmf/common.sh@7 -- # uname -s 00:18:46.849 23:21:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.849 23:21:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.849 23:21:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.849 23:21:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.849 23:21:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.849 23:21:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.849 23:21:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.849 23:21:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.849 23:21:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.849 23:21:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.849 23:21:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.849 23:21:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.849 23:21:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.849 23:21:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.849 23:21:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.849 23:21:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.849 23:21:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.849 23:21:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.849 23:21:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.849 23:21:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.849 23:21:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.849 23:21:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.849 23:21:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.849 23:21:35 -- paths/export.sh@5 -- # export PATH 00:18:46.849 23:21:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.849 23:21:35 -- nvmf/common.sh@47 -- # : 0 00:18:46.849 23:21:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.849 23:21:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.849 23:21:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.850 23:21:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.850 23:21:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.850 23:21:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.850 23:21:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.850 23:21:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.850 23:21:35 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:46.850 23:21:35 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:46.850 23:21:35 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.850 23:21:35 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:46.850 23:21:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:46.850 23:21:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.850 23:21:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:46.850 23:21:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:46.850 23:21:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:46.850 23:21:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.850 23:21:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.850 23:21:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.850 23:21:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:46.850 23:21:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:46.850 23:21:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.850 23:21:35 -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.432 23:21:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:53.432 23:21:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:53.432 23:21:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:53.432 23:21:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:53.432 23:21:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:53.432 23:21:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:53.432 23:21:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:53.432 23:21:42 -- nvmf/common.sh@295 -- # net_devs=() 00:18:53.432 23:21:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:53.432 23:21:42 -- nvmf/common.sh@296 -- # e810=() 00:18:53.432 23:21:42 -- nvmf/common.sh@296 -- # local -ga e810 00:18:53.432 23:21:42 -- nvmf/common.sh@297 -- # x722=() 00:18:53.432 23:21:42 -- nvmf/common.sh@297 -- # local -ga x722 00:18:53.432 23:21:42 -- nvmf/common.sh@298 -- # mlx=() 00:18:53.432 23:21:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:53.432 23:21:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.432 23:21:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:53.432 23:21:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:53.432 23:21:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:53.432 23:21:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.432 23:21:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:53.432 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:53.432 23:21:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.432 23:21:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:53.432 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:53.432 23:21:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:18:53.432 23:21:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:53.432 23:21:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.432 23:21:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.432 23:21:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:53.432 23:21:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.432 23:21:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:53.432 Found net devices under 0000:31:00.0: cvl_0_0 00:18:53.432 23:21:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.432 23:21:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.432 23:21:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.432 23:21:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:53.432 23:21:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.432 23:21:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:53.432 Found net devices under 0000:31:00.1: cvl_0_1 00:18:53.432 23:21:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.432 23:21:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:53.432 23:21:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:53.432 23:21:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:53.432 23:21:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:53.432 23:21:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.432 23:21:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.432 23:21:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.432 23:21:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:53.432 23:21:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.432 23:21:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.432 23:21:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:53.432 23:21:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.432 23:21:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.432 23:21:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:53.432 23:21:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:53.432 23:21:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.432 23:21:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.692 23:21:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.693 23:21:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.693 23:21:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:53.693 23:21:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.693 23:21:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.693 23:21:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.693 23:21:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:53.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:53.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:18:53.693 00:18:53.693 --- 10.0.0.2 ping statistics --- 00:18:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.693 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:18:53.693 23:21:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:18:53.693 00:18:53.693 --- 10.0.0.1 ping statistics --- 00:18:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.693 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:18:53.693 23:21:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.693 23:21:42 -- nvmf/common.sh@411 -- # return 0 00:18:53.693 23:21:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:53.693 23:21:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.693 23:21:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:53.693 23:21:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:53.693 23:21:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.693 23:21:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:53.693 23:21:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:53.952 23:21:42 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:53.952 23:21:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:53.952 23:21:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:53.953 23:21:42 -- common/autotest_common.sh@10 -- # set +x 00:18:53.953 23:21:42 -- nvmf/common.sh@470 -- # nvmfpid=3936083 00:18:53.953 23:21:42 -- nvmf/common.sh@471 -- # waitforlisten 3936083 00:18:53.953 23:21:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:53.953 23:21:42 -- common/autotest_common.sh@817 -- # '[' -z 3936083 ']' 00:18:53.953 23:21:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.953 23:21:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:53.953 23:21:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.953 23:21:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:53.953 23:21:42 -- common/autotest_common.sh@10 -- # set +x 00:18:53.953 [2024-04-26 23:21:43.040544] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:53.953 [2024-04-26 23:21:43.040591] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.953 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.953 [2024-04-26 23:21:43.106764] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.953 [2024-04-26 23:21:43.135505] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.953 [2024-04-26 23:21:43.135545] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:53.953 [2024-04-26 23:21:43.135552] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.953 [2024-04-26 23:21:43.135559] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.953 [2024-04-26 23:21:43.135565] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.953 [2024-04-26 23:21:43.135584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.892 23:21:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:54.892 23:21:43 -- common/autotest_common.sh@850 -- # return 0 00:18:54.892 23:21:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:54.892 23:21:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:54.892 23:21:43 -- common/autotest_common.sh@10 -- # set +x 00:18:54.892 23:21:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.892 23:21:43 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:54.893 23:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.893 23:21:43 -- common/autotest_common.sh@10 -- # set +x 00:18:54.893 [2024-04-26 23:21:43.859799] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.893 23:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.893 23:21:43 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:54.893 23:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.893 23:21:43 -- common/autotest_common.sh@10 -- # set +x 00:18:54.893 Malloc0 00:18:54.893 23:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.893 23:21:43 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:54.893 23:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.893 23:21:43 -- common/autotest_common.sh@10 -- # set +x 00:18:54.893 23:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.893 23:21:43 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:54.893 23:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.893 23:21:43 -- common/autotest_common.sh@10 -- # set +x 00:18:54.893 23:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.893 23:21:43 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.893 23:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.893 23:21:43 -- common/autotest_common.sh@10 -- # set +x 00:18:54.893 [2024-04-26 23:21:43.924815] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.893 23:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.893 23:21:43 -- target/queue_depth.sh@30 -- # bdevperf_pid=3936363 00:18:54.893 23:21:43 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:54.893 23:21:43 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:54.893 23:21:43 -- target/queue_depth.sh@33 -- # waitforlisten 3936363 /var/tmp/bdevperf.sock 00:18:54.893 23:21:43 -- common/autotest_common.sh@817 -- # '[' -z 3936363 ']' 
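The queue_depth target is now fully configured and bdevperf has just been launched against it. For reference, the rpc_cmd calls traced above reduce to a short target-side sequence; this is a condensed sketch using spdk/scripts/rpc.py against the nvmf_tgt already running in the cvl_0_0_ns_spdk namespace (names, sizes, and the 10.0.0.2:4420 listener are copied from the log, and rpc.py is shorthand for the full workspace path to scripts/rpc.py):

# TCP transport with the flags from the trace; -u 8192 caps in-capsule data at 8 KiB
rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB RAM-backed bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
rpc.py bdev_malloc_create 64 512 -b Malloc0
# Subsystem open to any host (-a), then attach the namespace and the TCP listener
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420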
00:18:54.893 23:21:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.893 23:21:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:54.893 23:21:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.893 23:21:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:54.893 23:21:43 -- common/autotest_common.sh@10 -- # set +x 00:18:54.893 [2024-04-26 23:21:43.978709] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:54.893 [2024-04-26 23:21:43.978756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936363 ] 00:18:54.893 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.893 [2024-04-26 23:21:44.038577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.893 [2024-04-26 23:21:44.067564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.893 23:21:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:54.893 23:21:44 -- common/autotest_common.sh@850 -- # return 0 00:18:54.893 23:21:44 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:54.893 23:21:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.893 23:21:44 -- common/autotest_common.sh@10 -- # set +x 00:18:55.153 NVMe0n1 00:18:55.153 23:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.153 23:21:44 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:55.153 Running I/O for 10 seconds... 
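The bdevperf half of the run above follows a three-step pattern that recurs throughout these tests: start bdevperf suspended, attach the remote controller over bdevperf's private RPC socket, then trigger the timed run. A sketch with the exact arguments from the trace (queue depth 1024, 4 KiB verify I/O, 10 seconds); paths are relative to the spdk checkout:

# 1. Start bdevperf idle (-z) with its own RPC socket
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# 2. Attach the NVMe-oF/TCP controller exported by the target
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# 3. Kick off the timed run; the latency table below is its output
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests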
00:19:05.175
00:19:05.175 Latency(us)
00:19:05.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:05.175 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:19:05.175 Verification LBA range: start 0x0 length 0x4000
00:19:05.175 NVMe0n1 : 10.08 9412.90 36.77 0.00 0.00 108295.44 24139.09 70341.97
00:19:05.175 ===================================================================================================================
00:19:05.175 Total : 9412.90 36.77 0.00 0.00 108295.44 24139.09 70341.97
00:19:05.175 0
00:19:05.175 23:21:54 -- target/queue_depth.sh@39 -- # killprocess 3936363
00:19:05.175 23:21:54 -- common/autotest_common.sh@936 -- # '[' -z 3936363 ']'
00:19:05.175 23:21:54 -- common/autotest_common.sh@940 -- # kill -0 3936363
00:19:05.435 23:21:54 -- common/autotest_common.sh@941 -- # uname
00:19:05.435 23:21:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:05.435 23:21:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3936363
00:19:05.435 23:21:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:05.435 23:21:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:05.435 23:21:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3936363'
killing process with pid 3936363
00:19:05.435 23:21:54 -- common/autotest_common.sh@955 -- # kill 3936363
00:19:05.435 Received shutdown signal, test time was about 10.000000 seconds
00:19:05.435
00:19:05.435 Latency(us)
00:19:05.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:05.435 ===================================================================================================================
00:19:05.435 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:05.435 23:21:54 -- common/autotest_common.sh@960 -- # wait 3936363
00:19:05.435 23:21:54 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:19:05.435 23:21:54 -- target/queue_depth.sh@43 -- # nvmftestfini
00:19:05.435 23:21:54 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:05.436 23:21:54 -- nvmf/common.sh@117 -- # sync
00:19:05.436 23:21:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:05.436 23:21:54 -- nvmf/common.sh@120 -- # set +e
00:19:05.436 23:21:54 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:05.436 23:21:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:05.436 rmmod nvme_tcp
00:19:05.436 rmmod nvme_fabrics
00:19:05.436 rmmod nvme_keyring
00:19:05.436 23:21:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:05.436 23:21:54 -- nvmf/common.sh@124 -- # set -e
00:19:05.436 23:21:54 -- nvmf/common.sh@125 -- # return 0
00:19:05.436 23:21:54 -- nvmf/common.sh@478 -- # '[' -n 3936083 ']'
00:19:05.436 23:21:54 -- nvmf/common.sh@479 -- # killprocess 3936083
00:19:05.436 23:21:54 -- common/autotest_common.sh@936 -- # '[' -z 3936083 ']'
00:19:05.436 23:21:54 -- common/autotest_common.sh@940 -- # kill -0 3936083
00:19:05.436 23:21:54 -- common/autotest_common.sh@941 -- # uname
00:19:05.436 23:21:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:05.436 23:21:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3936083
00:19:05.696 23:21:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:05.696 23:21:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:05.696 23:21:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3936083'
00:19:05.696 killing process with pid 3936083
23:21:54 -- common/autotest_common.sh@955 -- # kill 3936083 00:19:05.696 23:21:54 -- common/autotest_common.sh@960 -- # wait 3936083 00:19:05.696 23:21:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:05.696 23:21:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:05.696 23:21:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:05.696 23:21:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.696 23:21:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:05.696 23:21:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.696 23:21:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.696 23:21:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.244 23:21:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:08.244 00:19:08.244 real 0m21.151s 00:19:08.244 user 0m24.158s 00:19:08.244 sys 0m6.249s 00:19:08.244 23:21:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:08.244 23:21:56 -- common/autotest_common.sh@10 -- # set +x 00:19:08.244 ************************************ 00:19:08.244 END TEST nvmf_queue_depth 00:19:08.244 ************************************ 00:19:08.244 23:21:56 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:08.244 23:21:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:08.244 23:21:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:08.244 23:21:56 -- common/autotest_common.sh@10 -- # set +x 00:19:08.244 ************************************ 00:19:08.244 START TEST nvmf_multipath 00:19:08.244 ************************************ 00:19:08.244 23:21:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:08.244 * Looking for test storage... 
00:19:08.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.244 23:21:57 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.244 23:21:57 -- nvmf/common.sh@7 -- # uname -s 00:19:08.244 23:21:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.244 23:21:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.244 23:21:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.244 23:21:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.244 23:21:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.244 23:21:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.244 23:21:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.244 23:21:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.244 23:21:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.244 23:21:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.244 23:21:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.244 23:21:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.244 23:21:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.244 23:21:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.244 23:21:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.244 23:21:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.244 23:21:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.244 23:21:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.244 23:21:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.244 23:21:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.244 23:21:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.244 23:21:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.244 23:21:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.244 23:21:57 -- paths/export.sh@5 -- # export PATH 00:19:08.244 23:21:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.244 23:21:57 -- nvmf/common.sh@47 -- # : 0 00:19:08.244 23:21:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:08.244 23:21:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:08.244 23:21:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.244 23:21:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.244 23:21:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.244 23:21:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:08.244 23:21:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:08.244 23:21:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:08.244 23:21:57 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:08.244 23:21:57 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:08.244 23:21:57 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:08.244 23:21:57 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.244 23:21:57 -- target/multipath.sh@43 -- # nvmftestinit 00:19:08.244 23:21:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:08.244 23:21:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.244 23:21:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:08.244 23:21:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:08.244 23:21:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:08.244 23:21:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.244 23:21:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.244 23:21:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.244 23:21:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:08.244 23:21:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:08.244 23:21:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:08.244 23:21:57 -- common/autotest_common.sh@10 -- # set +x 00:19:16.385 23:22:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:16.385 23:22:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.385 23:22:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.385 23:22:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.385 23:22:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.385 23:22:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.385 23:22:04 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.385 23:22:04 -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.385 23:22:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.385 23:22:04 -- nvmf/common.sh@296 -- # e810=() 00:19:16.385 23:22:04 -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.385 23:22:04 -- nvmf/common.sh@297 -- # x722=() 00:19:16.385 23:22:04 -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.385 23:22:04 -- nvmf/common.sh@298 -- # mlx=() 00:19:16.385 23:22:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.385 23:22:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.385 23:22:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.385 23:22:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.385 23:22:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.385 23:22:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.385 23:22:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:16.385 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:16.385 23:22:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.385 23:22:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:16.385 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:16.385 23:22:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.385 23:22:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.385 23:22:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.385 23:22:04 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:19:16.385 23:22:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.385 23:22:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:16.385 Found net devices under 0000:31:00.0: cvl_0_0 00:19:16.385 23:22:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.385 23:22:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.385 23:22:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.385 23:22:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:16.385 23:22:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.385 23:22:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:16.385 Found net devices under 0000:31:00.1: cvl_0_1 00:19:16.385 23:22:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.385 23:22:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:16.385 23:22:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:16.385 23:22:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:16.385 23:22:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.385 23:22:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.385 23:22:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.385 23:22:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.385 23:22:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.385 23:22:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.385 23:22:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.385 23:22:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.385 23:22:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.385 23:22:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.385 23:22:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.385 23:22:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.385 23:22:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.385 23:22:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.385 23:22:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.385 23:22:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.385 23:22:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.385 23:22:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.385 23:22:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.385 23:22:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:16.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:19:16.385 00:19:16.385 --- 10.0.0.2 ping statistics --- 00:19:16.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.385 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:19:16.385 23:22:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:19:16.385 00:19:16.385 --- 10.0.0.1 ping statistics --- 00:19:16.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.385 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:19:16.385 23:22:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.385 23:22:04 -- nvmf/common.sh@411 -- # return 0 00:19:16.385 23:22:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:16.385 23:22:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.385 23:22:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.385 23:22:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:16.385 23:22:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:16.385 23:22:04 -- target/multipath.sh@45 -- # '[' -z ']' 00:19:16.385 23:22:04 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:16.385 only one NIC for nvmf test 00:19:16.385 23:22:04 -- target/multipath.sh@47 -- # nvmftestfini 00:19:16.385 23:22:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:16.385 23:22:04 -- nvmf/common.sh@117 -- # sync 00:19:16.385 23:22:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.385 23:22:04 -- nvmf/common.sh@120 -- # set +e 00:19:16.385 23:22:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.385 23:22:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.385 rmmod nvme_tcp 00:19:16.385 rmmod nvme_fabrics 00:19:16.385 rmmod nvme_keyring 00:19:16.385 23:22:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.385 23:22:04 -- nvmf/common.sh@124 -- # set -e 00:19:16.385 23:22:04 -- nvmf/common.sh@125 -- # return 0 00:19:16.385 23:22:04 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:16.385 23:22:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:16.385 23:22:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:16.385 23:22:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.385 23:22:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.385 23:22:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.385 23:22:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.385 23:22:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.768 23:22:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:17.768 23:22:06 -- target/multipath.sh@48 -- # exit 0 00:19:17.768 23:22:06 -- target/multipath.sh@1 -- # nvmftestfini 00:19:17.768 23:22:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:17.768 23:22:06 -- nvmf/common.sh@117 -- # sync 00:19:17.768 23:22:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:17.768 23:22:06 -- nvmf/common.sh@120 -- # set +e 00:19:17.768 23:22:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:17.768 23:22:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:17.768 23:22:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:17.768 23:22:06 -- nvmf/common.sh@124 -- # set -e 00:19:17.768 23:22:06 -- nvmf/common.sh@125 -- # return 0 00:19:17.768 23:22:06 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:17.768 23:22:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:17.768 23:22:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:17.768 23:22:06 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:19:17.768 23:22:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.768 23:22:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:17.768 23:22:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.768 23:22:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.768 23:22:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.768 23:22:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:17.768 00:19:17.768 real 0m9.672s 00:19:17.768 user 0m2.027s 00:19:17.768 sys 0m5.547s 00:19:17.768 23:22:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:17.768 23:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:17.768 ************************************ 00:19:17.768 END TEST nvmf_multipath 00:19:17.768 ************************************ 00:19:17.768 23:22:06 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:17.768 23:22:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:17.768 23:22:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:17.768 23:22:06 -- common/autotest_common.sh@10 -- # set +x 00:19:17.768 ************************************ 00:19:17.768 START TEST nvmf_zcopy 00:19:17.768 ************************************ 00:19:17.768 23:22:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:18.029 * Looking for test storage... 00:19:18.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.029 23:22:07 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.029 23:22:07 -- nvmf/common.sh@7 -- # uname -s 00:19:18.029 23:22:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.029 23:22:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.029 23:22:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.029 23:22:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.029 23:22:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.029 23:22:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.029 23:22:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.029 23:22:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.029 23:22:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.029 23:22:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.029 23:22:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.029 23:22:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.029 23:22:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.029 23:22:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.029 23:22:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.029 23:22:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.029 23:22:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.029 23:22:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.029 23:22:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.029 23:22:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.029 
23:22:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.029 23:22:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.029 23:22:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.029 23:22:07 -- paths/export.sh@5 -- # export PATH 00:19:18.029 23:22:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.029 23:22:07 -- nvmf/common.sh@47 -- # : 0 00:19:18.029 23:22:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:18.029 23:22:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:18.029 23:22:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.029 23:22:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.029 23:22:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.029 23:22:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:18.029 23:22:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:18.029 23:22:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:18.029 23:22:07 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:18.029 23:22:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:18.029 23:22:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.029 23:22:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:18.029 23:22:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:18.029 23:22:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:18.029 23:22:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.029 23:22:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:19:18.029 23:22:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.029 23:22:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:18.029 23:22:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:18.029 23:22:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:18.029 23:22:07 -- common/autotest_common.sh@10 -- # set +x 00:19:26.184 23:22:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:26.184 23:22:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.184 23:22:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.184 23:22:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.184 23:22:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.184 23:22:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.184 23:22:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.184 23:22:13 -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.184 23:22:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.184 23:22:13 -- nvmf/common.sh@296 -- # e810=() 00:19:26.184 23:22:13 -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.184 23:22:13 -- nvmf/common.sh@297 -- # x722=() 00:19:26.184 23:22:13 -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.184 23:22:13 -- nvmf/common.sh@298 -- # mlx=() 00:19:26.184 23:22:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.184 23:22:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.184 23:22:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.184 23:22:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.184 23:22:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.184 23:22:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.184 23:22:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.184 23:22:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.184 23:22:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.184 23:22:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:26.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:26.185 23:22:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.185 23:22:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:26.185 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:19:26.185 23:22:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.185 23:22:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.185 23:22:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.185 23:22:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:26.185 23:22:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.185 23:22:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:26.185 Found net devices under 0000:31:00.0: cvl_0_0 00:19:26.185 23:22:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.185 23:22:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.185 23:22:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.185 23:22:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:26.185 23:22:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.185 23:22:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:26.185 Found net devices under 0000:31:00.1: cvl_0_1 00:19:26.185 23:22:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.185 23:22:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:26.185 23:22:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:26.185 23:22:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:26.185 23:22:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:26.185 23:22:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.185 23:22:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.185 23:22:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.185 23:22:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:26.185 23:22:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.185 23:22:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.185 23:22:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:26.185 23:22:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.185 23:22:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.185 23:22:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:26.185 23:22:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:26.185 23:22:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.185 23:22:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.185 23:22:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.185 23:22:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.185 23:22:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:26.185 23:22:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.185 23:22:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.185 
23:22:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.185 23:22:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:26.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:19:26.185 00:19:26.185 --- 10.0.0.2 ping statistics --- 00:19:26.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.185 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:19:26.185 23:22:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:19:26.185 00:19:26.185 --- 10.0.0.1 ping statistics --- 00:19:26.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.185 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:19:26.185 23:22:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.185 23:22:14 -- nvmf/common.sh@411 -- # return 0 00:19:26.185 23:22:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:26.185 23:22:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.185 23:22:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:26.185 23:22:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:26.185 23:22:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.185 23:22:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:26.185 23:22:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:26.185 23:22:14 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:26.185 23:22:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:26.185 23:22:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:26.185 23:22:14 -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 23:22:14 -- nvmf/common.sh@470 -- # nvmfpid=3946829 00:19:26.185 23:22:14 -- nvmf/common.sh@471 -- # waitforlisten 3946829 00:19:26.185 23:22:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:26.185 23:22:14 -- common/autotest_common.sh@817 -- # '[' -z 3946829 ']' 00:19:26.185 23:22:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.185 23:22:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:26.185 23:22:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.185 23:22:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:26.185 23:22:14 -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 [2024-04-26 23:22:14.407537] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:26.185 [2024-04-26 23:22:14.407586] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.185 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.185 [2024-04-26 23:22:14.473923] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.185 [2024-04-26 23:22:14.502258] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:26.185 [2024-04-26 23:22:14.502295] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.185 [2024-04-26 23:22:14.502303] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.185 [2024-04-26 23:22:14.502309] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.185 [2024-04-26 23:22:14.502315] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.185 [2024-04-26 23:22:14.502335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.185 23:22:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:26.185 23:22:15 -- common/autotest_common.sh@850 -- # return 0 00:19:26.185 23:22:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:26.185 23:22:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:26.185 23:22:15 -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 23:22:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.185 23:22:15 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:26.185 23:22:15 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:26.185 23:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.185 23:22:15 -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 [2024-04-26 23:22:15.210600] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.185 23:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.185 23:22:15 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:26.185 23:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.185 23:22:15 -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 23:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.185 23:22:15 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.185 23:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.185 23:22:15 -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 [2024-04-26 23:22:15.226773] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.185 23:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.185 23:22:15 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:26.185 23:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.185 23:22:15 -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 23:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.185 23:22:15 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:26.185 23:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.185 23:22:15 -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 malloc0 00:19:26.185 23:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.185 23:22:15 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:26.185 23:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.185 23:22:15 -- common/autotest_common.sh@10 -- # set +x 00:19:26.185 23:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.185 23:22:15 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:26.185 23:22:15 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:26.185 23:22:15 -- nvmf/common.sh@521 -- # config=() 00:19:26.185 23:22:15 -- nvmf/common.sh@521 -- # local subsystem config 00:19:26.185 23:22:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:26.185 23:22:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:26.185 { 00:19:26.185 "params": { 00:19:26.185 "name": "Nvme$subsystem", 00:19:26.185 "trtype": "$TEST_TRANSPORT", 00:19:26.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:26.185 "adrfam": "ipv4", 00:19:26.185 "trsvcid": "$NVMF_PORT", 00:19:26.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:26.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:26.185 "hdgst": ${hdgst:-false}, 00:19:26.185 "ddgst": ${ddgst:-false} 00:19:26.185 }, 00:19:26.186 "method": "bdev_nvme_attach_controller" 00:19:26.186 } 00:19:26.186 EOF 00:19:26.186 )") 00:19:26.186 23:22:15 -- nvmf/common.sh@543 -- # cat 00:19:26.186 23:22:15 -- nvmf/common.sh@545 -- # jq . 00:19:26.186 23:22:15 -- nvmf/common.sh@546 -- # IFS=, 00:19:26.186 23:22:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:26.186 "params": { 00:19:26.186 "name": "Nvme1", 00:19:26.186 "trtype": "tcp", 00:19:26.186 "traddr": "10.0.0.2", 00:19:26.186 "adrfam": "ipv4", 00:19:26.186 "trsvcid": "4420", 00:19:26.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.186 "hdgst": false, 00:19:26.186 "ddgst": false 00:19:26.186 }, 00:19:26.186 "method": "bdev_nvme_attach_controller" 00:19:26.186 }' 00:19:26.186 [2024-04-26 23:22:15.305569] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:26.186 [2024-04-26 23:22:15.305616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947171 ] 00:19:26.186 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.186 [2024-04-26 23:22:15.364457] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.186 [2024-04-26 23:22:15.393366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.447 Running I/O for 10 seconds... 
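The rpc_cmd calls above configure the target end to end: a TCP transport created with --zcopy (the flags -o and -c 0 are carried over verbatim from NVMF_TRANSPORT_OPTS), subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4096-byte blocks exported as namespace 1. bdevperf on the initiator side then reads the bdev_nvme_attach_controller config printed above from an anonymous file descriptor. The same setup as plain rpc.py invocations, a sketch assuming the SPDK source tree and the default /var/tmp/spdk.sock socket:

./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# initiator side: feed the generated attach-controller JSON to bdevperf on an anonymous fd
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192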
00:19:38.700 00:19:38.700 Latency(us) 00:19:38.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.700 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:38.700 Verification LBA range: start 0x0 length 0x1000 00:19:38.700 Nvme1n1 : 10.01 6666.98 52.09 0.00 0.00 19141.51 3795.63 26105.17 00:19:38.700 =================================================================================================================== 00:19:38.700 Total : 6666.98 52.09 0.00 0.00 19141.51 3795.63 26105.17 00:19:38.700 23:22:25 -- target/zcopy.sh@39 -- # perfpid=3949181 00:19:38.700 23:22:25 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:38.700 23:22:25 -- target/zcopy.sh@41 -- # xtrace_disable 00:19:38.700 23:22:25 -- common/autotest_common.sh@10 -- # set +x 00:19:38.700 23:22:25 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:38.700 23:22:25 -- nvmf/common.sh@521 -- # config=() 00:19:38.700 23:22:25 -- nvmf/common.sh@521 -- # local subsystem config 00:19:38.700 23:22:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:38.700 23:22:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:38.700 { 00:19:38.700 "params": { 00:19:38.700 "name": "Nvme$subsystem", 00:19:38.700 "trtype": "$TEST_TRANSPORT", 00:19:38.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.700 "adrfam": "ipv4", 00:19:38.700 "trsvcid": "$NVMF_PORT", 00:19:38.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.700 "hdgst": ${hdgst:-false}, 00:19:38.700 "ddgst": ${ddgst:-false} 00:19:38.700 }, 00:19:38.700 "method": "bdev_nvme_attach_controller" 00:19:38.700 } 00:19:38.700 EOF 00:19:38.700 )") 00:19:38.700 23:22:25 -- nvmf/common.sh@543 -- # cat 00:19:38.700 [2024-04-26 23:22:25.859474] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.859510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 23:22:25 -- nvmf/common.sh@545 -- # jq . 00:19:38.700 [2024-04-26 23:22:25.867455] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.867467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 23:22:25 -- nvmf/common.sh@546 -- # IFS=, 00:19:38.700 23:22:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:38.700 "params": { 00:19:38.700 "name": "Nvme1", 00:19:38.700 "trtype": "tcp", 00:19:38.700 "traddr": "10.0.0.2", 00:19:38.700 "adrfam": "ipv4", 00:19:38.700 "trsvcid": "4420", 00:19:38.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.700 "hdgst": false, 00:19:38.700 "ddgst": false 00:19:38.700 }, 00:19:38.700 "method": "bdev_nvme_attach_controller" 00:19:38.700 }' 00:19:38.700 [2024-04-26 23:22:25.875473] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.875484] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.882123] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
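The verify-run table above is internally consistent for the 8192-byte I/O size at queue depth 128; a quick sanity check of the IOPS, bandwidth, and average-latency columns:

echo '6666.98 * 8192 / 1048576' | bc -l   # ~= 52.09, matching the MiB/s column
echo '128 / 6666.98 * 1000000' | bc -l    # ~= 19199 us, close to the 19141.51 us average (Little's law)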
00:19:38.700 [2024-04-26 23:22:25.882172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949181 ] 00:19:38.700 [2024-04-26 23:22:25.883492] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.883502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.891515] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.891525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.899535] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.899545] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.907556] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.907566] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.700 [2024-04-26 23:22:25.915578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.915589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.923599] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.923609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.931621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.931631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.939642] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.939652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.942395] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.700 [2024-04-26 23:22:25.947663] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.947676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.955684] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.955696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.963704] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.963717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.970667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.700 [2024-04-26 23:22:25.971723] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.971735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.979744] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.979754] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.987772] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.987786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:25.995791] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:25.995802] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.003810] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:26.003821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.011832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:26.011846] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.019857] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:26.019868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.027877] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:26.027887] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.035897] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:26.035907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.043923] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:26.043942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.051938] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:26.051950] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.059959] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:26.059971] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.067982] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:26.067995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.076005] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.700 [2024-04-26 23:22:26.076017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.700 [2024-04-26 23:22:26.084027] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.084040] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.092045] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.092056] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
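From here to the end of the section the log is an unbroken run of these paired errors, recurring every few milliseconds for the duration of the 5-second randrw job. The pattern is consistent with the zcopy test repeatedly driving the add-namespace RPC while namespace 1 is still attached, exercising the subsystem pause/resume path (nvmf_rpc_ns_paused) under zero-copy I/O; each attempt is rejected at subsystem.c:1906 before any state changes. The failing call, reconstructed in rpc.py form from the flags shown earlier (an assumption about what the harness issues, not taken from the log itself):

./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# => "Requested NSID 1 already in use" / "Unable to add namespace"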
00:19:38.701 [2024-04-26 23:22:26.100076] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.100094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.108089] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.108100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 Running I/O for 5 seconds... 00:19:38.701 [2024-04-26 23:22:26.116111] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.116126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.128923] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.128943] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.138410] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.138429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.148745] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.148764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.160331] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.160349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.168751] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.168769] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.180686] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.180703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.192155] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.192173] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.200459] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.200477] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.212250] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.212267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.223281] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.223299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.231714] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.231731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.243463] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.243482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.252656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.252673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.264190] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.264208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.272978] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.272996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.283132] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.283150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.292811] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.292829] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.302473] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.302491] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.312210] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.312232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.322126] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.322144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.331783] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.331801] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.341049] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.341066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.350636] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.350653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.360232] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.360250] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.369950] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.369967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.379455] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.379473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 
[2024-04-26 23:22:26.389290] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.389308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.398914] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.398932] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.408615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.408632] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.418279] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.418297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.427883] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.427901] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.437872] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.437890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.447377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.447395] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.458917] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.458935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.467640] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.467657] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.477796] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.477814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.487336] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.487354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.497015] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.497032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.506723] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.506741] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.516211] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.516230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.526038] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 
23:22:26.526055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.535796] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.535814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.545624] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.545642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.555339] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.555357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.565067] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.565085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.574528] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.574546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.584046] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.584063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.593556] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.593574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.603193] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.701 [2024-04-26 23:22:26.603211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.701 [2024-04-26 23:22:26.612900] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.612918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.622558] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.622577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.632233] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.632252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.642126] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.642145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.651979] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.651997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.661579] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.661597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.671088] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.671106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.680693] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.680710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.690471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.690488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.700223] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.700241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.709934] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.709952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.719660] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.719677] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.729006] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.729024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.738637] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.738655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.748218] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.748236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.757881] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.757899] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.767460] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.767478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.777126] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.777144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.786661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.786679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.796292] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.796310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.805727] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.805745] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.815441] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.815459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.825116] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.825134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.834743] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.834761] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.844287] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.844305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.853858] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.853876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.863445] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.863463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.873319] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.873337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.882799] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.882817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.892627] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.892645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.902322] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.902340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.912113] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.912131] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.921985] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.922003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.931494] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.931512] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.941184] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.941203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.950964] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.950982] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.960656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.960674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.970390] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.970408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.980288] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.980306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.989930] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.989948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:26.999574] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:26.999592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.009323] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.009341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.018817] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.018835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.028734] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.028757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.038428] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.038452] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.048343] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.048362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.057790] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.057808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.067052] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.067070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.076766] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.076783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.086613] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.086631] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.096175] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.096192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.105938] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.105956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.115792] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.115811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.125535] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.125553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.702 [2024-04-26 23:22:27.135187] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.702 [2024-04-26 23:22:27.135205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.144872] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.144890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.154537] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.154555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.164595] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.164613] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.174192] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.174210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.183821] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.183844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.193372] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.193389] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.203166] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.203183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.212792] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.212809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.222405] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.222427] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.232069] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.232087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.241589] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.241606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.251081] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.251099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.260762] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.260781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.270621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.270639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.280257] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.280276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.289935] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.289953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.299818] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.299835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.309392] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.309410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.319175] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.319193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.328820] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.328842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.338551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.338568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.348240] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.348257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.358048] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.358066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.367709] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.367727] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.377454] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.377472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.387210] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.387228] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.396713] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.396731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.406431] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.406452] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.416454] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.416471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.426276] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.426294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.435300] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.435317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.445702] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.445720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.455192] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.455209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.466856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.466874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.477182] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.477200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.485615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.485632] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.497529] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.497546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.508594] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.508611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:38.703 [2024-04-26 23:22:27.517050] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:38.703 [2024-04-26 23:22:27.517067] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... this two-message pair (subsystem.c:1906 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1534 "Unable to add namespace") repeats continuously, with only the timestamps advancing roughly every 10-20 ms, from 2024-04-26 23:22:27.529 through 23:22:30.841 (elapsed log clock 00:19:38.703 -> 00:19:41.846); several hundred near-identical repetitions elided ...]
00:19:41.846 [2024-04-26 23:22:30.841859] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:30.841879]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:30.857509] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:30.857527] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:30.874928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:30.874946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:30.890970] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:30.890987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:30.909179] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:30.909197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:30.923995] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:30.924013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:30.939785] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:30.939807] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:30.957363] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:30.957382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:30.973228] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:30.973246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:30.990501] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:30.990520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:31.006641] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:31.006659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:31.023529] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:31.023548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:31.040553] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:31.040571] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:31.056998] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:31.057015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:31.074337] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:41.846 [2024-04-26 23:22:31.074355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:41.846 [2024-04-26 23:22:31.090297] 
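The flood of identical errors above is the test behaving as intended: while I/O is in flight, the script appears to keep re-issuing nvmf_subsystem_add_ns for NSID 1, which is already occupied, and each rejection is the expected outcome. A plausible reconstruction of that loop follows; this is a sketch, not the script's actual code, and the rpc.py path, the Malloc0 bdev name, and the iteration count are illustrative assumptions.

#!/usr/bin/env bash
# Sketch: repeatedly try to claim an NSID that is already in use and require
# that every attempt fails. Assumes a target listening on /var/tmp/spdk.sock
# whose subsystem cnode1 already has a namespace with NSID 1.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1

for _ in $(seq 1 50); do
    # NSID 1 is taken, so the RPC must fail; a success here is a test failure.
    if "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0 -n 1 2>/dev/null; then
        echo "duplicate NSID unexpectedly accepted" >&2
        exit 1
    fi
done

Each rejected call produces exactly one subsystem.c:1906 / nvmf_rpc.c:1534 pair in the target's log, which is what the elided run above consists of.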
[... the same error pair continues through 2024-04-26 23:22:31.123 ...] 00:19:42.108
00:19:42.108 Latency(us)
00:19:42.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:42.108 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:42.108 Nvme1n1 : 5.01 13111.48 102.43 0.00 0.00 9752.19 4369.07 25449.81
00:19:42.108 ===================================================================================================================
00:19:42.108 Total : 13111.48 102.43 0.00 0.00 9752.19 4369.07 25449.81
00:19:42.108 [2024-04-26 23:22:31.135163] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.108 [2024-04-26 23:22:31.135181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.108 [... the pair drains at ~12 ms intervals, eight repetitions elided, 23:22:31.147 through 23:22:31.231 ...] [2024-04-26 23:22:31.243436] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.108 [2024-04-26 23:22:31.243445]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3949181) - No such process 00:19:42.108 23:22:31 -- target/zcopy.sh@49 -- # wait 3949181 00:19:42.108 23:22:31 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:42.108 23:22:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.108 23:22:31 -- common/autotest_common.sh@10 -- # set +x 00:19:42.108 23:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.108 23:22:31 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:42.108 23:22:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.108 23:22:31 -- common/autotest_common.sh@10 -- # set +x 00:19:42.108 delay0 00:19:42.108 23:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.108 23:22:31 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:42.108 23:22:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.108 23:22:31 -- common/autotest_common.sh@10 -- # set +x 00:19:42.108 23:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.108 23:22:31 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:42.108 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.368 [2024-04-26 23:22:31.396038] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:50.559 Initializing NVMe Controllers 00:19:50.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:50.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:50.559 Initialization complete. Launching workers. 
00:19:50.559 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 249, failed: 24550 00:19:50.559 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 24676, failed to submit 123 00:19:50.559 success 24575, unsuccess 101, failed 0 00:19:50.559 23:22:38 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:50.559 23:22:38 -- target/zcopy.sh@60 -- # nvmftestfini 00:19:50.559 23:22:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:50.559 23:22:38 -- nvmf/common.sh@117 -- # sync 00:19:50.559 23:22:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.559 23:22:38 -- nvmf/common.sh@120 -- # set +e 00:19:50.559 23:22:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.559 23:22:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.559 rmmod nvme_tcp 00:19:50.559 rmmod nvme_fabrics 00:19:50.559 rmmod nvme_keyring 00:19:50.559 23:22:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.559 23:22:38 -- nvmf/common.sh@124 -- # set -e 00:19:50.559 23:22:38 -- nvmf/common.sh@125 -- # return 0 00:19:50.559 23:22:38 -- nvmf/common.sh@478 -- # '[' -n 3946829 ']' 00:19:50.559 23:22:38 -- nvmf/common.sh@479 -- # killprocess 3946829 00:19:50.559 23:22:38 -- common/autotest_common.sh@936 -- # '[' -z 3946829 ']' 00:19:50.559 23:22:38 -- common/autotest_common.sh@940 -- # kill -0 3946829 00:19:50.559 23:22:38 -- common/autotest_common.sh@941 -- # uname 00:19:50.559 23:22:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:50.559 23:22:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3946829 00:19:50.559 23:22:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:50.559 23:22:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:50.559 23:22:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3946829' 00:19:50.559 killing process with pid 3946829 00:19:50.559 23:22:38 -- common/autotest_common.sh@955 -- # kill 3946829 00:19:50.559 23:22:38 -- common/autotest_common.sh@960 -- # wait 3946829 00:19:50.559 23:22:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:50.559 23:22:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:50.559 23:22:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:50.559 23:22:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.559 23:22:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:50.559 23:22:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.559 23:22:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.559 23:22:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.950 23:22:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:51.950 00:19:51.950 real 0m33.921s 00:19:51.950 user 0m45.954s 00:19:51.950 sys 0m10.712s 00:19:51.950 23:22:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:51.950 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:19:51.950 ************************************ 00:19:51.950 END TEST nvmf_zcopy 00:19:51.950 ************************************ 00:19:51.950 23:22:40 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:51.950 23:22:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:51.950 23:22:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:51.950 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:19:51.950 
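Before the nmic test starts below, it is worth condensing the zcopy wrap-up just logged: the test swaps the original namespace for a deliberately slow delay bdev and then runs the bundled abort example against it, so queued commands live long enough to be cancelled. Every command in this sketch is visible verbatim in the log; the only assumptions are a running target with the malloc0 bdev already created and paths relative to an SPDK checkout.

#!/usr/bin/env bash
# Sketch of the delay-bdev + abort sequence shown above.
rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_subsystem_remove_ns "$nqn" 1
# 1,000,000 us of added latency per I/O (read and write, average and p99),
# so in-flight commands are slow enough for aborts to land.
"$rpc" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$rpc" nvmf_subsystem_add_ns "$nqn" delay0 -n 1

# 5-second randrw run at queue depth 64 on one core, submitting aborts
# against the in-flight I/O; the summary above counts success/unsuccess.
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'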
************************************ 00:19:51.950 START TEST nvmf_nmic 00:19:51.950 ************************************ 00:19:51.950 23:22:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:52.211 * Looking for test storage... 00:19:52.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:52.211 23:22:41 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.211 23:22:41 -- nvmf/common.sh@7 -- # uname -s 00:19:52.211 23:22:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.211 23:22:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.211 23:22:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.211 23:22:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.211 23:22:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.211 23:22:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.211 23:22:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.211 23:22:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.211 23:22:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.211 23:22:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.211 23:22:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:52.211 23:22:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:52.211 23:22:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.211 23:22:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.211 23:22:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.211 23:22:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.211 23:22:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.211 23:22:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.211 23:22:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.211 23:22:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.211 23:22:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.211 23:22:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.211 23:22:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.211 23:22:41 -- paths/export.sh@5 -- # export PATH 00:19:52.211 23:22:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.211 23:22:41 -- nvmf/common.sh@47 -- # : 0 00:19:52.211 23:22:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.211 23:22:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.211 23:22:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.212 23:22:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.212 23:22:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.212 23:22:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.212 23:22:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.212 23:22:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.212 23:22:41 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.212 23:22:41 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.212 23:22:41 -- target/nmic.sh@14 -- # nvmftestinit 00:19:52.212 23:22:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:52.212 23:22:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.212 23:22:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:52.212 23:22:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:52.212 23:22:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:52.212 23:22:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.212 23:22:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.212 23:22:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.212 23:22:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:52.212 23:22:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:52.212 23:22:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.212 23:22:41 -- common/autotest_common.sh@10 -- # set +x 00:20:00.363 23:22:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:00.363 23:22:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:00.363 23:22:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:00.363 23:22:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:00.363 23:22:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:00.363 23:22:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:00.363 23:22:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:00.363 23:22:48 -- nvmf/common.sh@295 -- # net_devs=() 00:20:00.363 23:22:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:00.363 23:22:48 -- nvmf/common.sh@296 -- # 
e810=() 00:20:00.363 23:22:48 -- nvmf/common.sh@296 -- # local -ga e810 00:20:00.363 23:22:48 -- nvmf/common.sh@297 -- # x722=() 00:20:00.363 23:22:48 -- nvmf/common.sh@297 -- # local -ga x722 00:20:00.363 23:22:48 -- nvmf/common.sh@298 -- # mlx=() 00:20:00.363 23:22:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:00.363 23:22:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.363 23:22:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:00.363 23:22:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:00.363 23:22:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:00.363 23:22:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.363 23:22:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:00.363 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:00.363 23:22:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.363 23:22:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:00.363 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:00.363 23:22:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.363 23:22:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.364 23:22:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:00.364 23:22:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:00.364 23:22:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:00.364 23:22:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.364 23:22:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.364 23:22:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:00.364 23:22:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.364 23:22:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:00.364 Found net 
devices under 0000:31:00.0: cvl_0_0 00:20:00.364 23:22:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.364 23:22:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.364 23:22:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.364 23:22:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:00.364 23:22:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.364 23:22:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:00.364 Found net devices under 0000:31:00.1: cvl_0_1 00:20:00.364 23:22:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.364 23:22:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:00.364 23:22:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:00.364 23:22:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:00.364 23:22:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:00.364 23:22:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:00.364 23:22:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.364 23:22:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.364 23:22:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.364 23:22:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:00.364 23:22:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.364 23:22:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.364 23:22:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:00.364 23:22:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.364 23:22:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.364 23:22:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:00.364 23:22:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:00.364 23:22:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.364 23:22:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.364 23:22:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.364 23:22:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.364 23:22:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:00.364 23:22:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.364 23:22:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.364 23:22:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.364 23:22:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:00.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:20:00.364 00:20:00.364 --- 10.0.0.2 ping statistics --- 00:20:00.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.364 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:20:00.364 23:22:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:20:00.364 00:20:00.364 --- 10.0.0.1 ping statistics --- 00:20:00.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.364 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:20:00.364 23:22:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.364 23:22:48 -- nvmf/common.sh@411 -- # return 0 00:20:00.364 23:22:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:00.364 23:22:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.364 23:22:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:00.364 23:22:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:00.364 23:22:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.364 23:22:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:00.364 23:22:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:00.364 23:22:48 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:00.364 23:22:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:00.364 23:22:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 23:22:48 -- nvmf/common.sh@470 -- # nvmfpid=3955916 00:20:00.364 23:22:48 -- nvmf/common.sh@471 -- # waitforlisten 3955916 00:20:00.364 23:22:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:00.364 23:22:48 -- common/autotest_common.sh@817 -- # '[' -z 3955916 ']' 00:20:00.364 23:22:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.364 23:22:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.364 23:22:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.364 23:22:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 [2024-04-26 23:22:48.619519] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:00.364 [2024-04-26 23:22:48.619567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.364 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.364 [2024-04-26 23:22:48.686575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.364 [2024-04-26 23:22:48.717514] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.364 [2024-04-26 23:22:48.717555] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.364 [2024-04-26 23:22:48.717564] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.364 [2024-04-26 23:22:48.717576] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.364 [2024-04-26 23:22:48.717582] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
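The network plumbing that nvmftestinit performed above is easier to read in one place: the target-side E810 port (renamed cvl_0_0) is moved into a private network namespace, both ends get 10.0.0.x addresses, reachability is ping-checked, and the target application is launched inside that namespace. A condensed sketch of the commands the log records, to be run as root:

#!/usr/bin/env bash
# Sketch of the test-rig network setup and target launch recorded above.
NS=cvl_0_0_ns_spdk
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target sanity check

# Start the target inside the namespace: -m 0xF pins four reactor cores and
# -e 0xFFFF enables every tracepoint group, matching the startup notices above.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &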
00:20:00.364 [2024-04-26 23:22:48.719857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.364 [2024-04-26 23:22:48.719999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.364 [2024-04-26 23:22:48.720205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.364 [2024-04-26 23:22:48.720207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.364 23:22:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:00.364 23:22:48 -- common/autotest_common.sh@850 -- # return 0 00:20:00.364 23:22:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:00.364 23:22:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 23:22:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.364 23:22:48 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:00.364 23:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 [2024-04-26 23:22:48.865657] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.364 23:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.364 23:22:48 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:00.364 23:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 Malloc0 00:20:00.364 23:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.364 23:22:48 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:00.364 23:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 23:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.364 23:22:48 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:00.364 23:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 23:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.364 23:22:48 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.364 23:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 [2024-04-26 23:22:48.922450] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.364 23:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.364 23:22:48 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:00.364 test case1: single bdev can't be used in multiple subsystems 00:20:00.364 23:22:48 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:00.364 23:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 23:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.364 23:22:48 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:00.364 23:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 
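test case1 above is checking a claim rule: the first subsystem to expose a bdev claims it exclusive_write, so attaching the same Malloc0 to a second subsystem has to fail, and the JSON-RPC error for that rejected call follows below. The same sequence as a sketch; the rpc.py path is an assumption, since the log drives these calls through the rpc_cmd wrapper.

#!/usr/bin/env bash
# Sketch of test case1: one malloc bdev may back only one subsystem.
rpc=./scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2

# Malloc0 is already claimed by cnode1, so this call is expected to fail.
if "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "duplicate bdev claim unexpectedly succeeded" >&2
    exit 1
fi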
00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 23:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.364 23:22:48 -- target/nmic.sh@28 -- # nmic_status=0 00:20:00.364 23:22:48 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:00.364 23:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.364 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 [2024-04-26 23:22:48.958372] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:00.364 [2024-04-26 23:22:48.958390] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:00.364 [2024-04-26 23:22:48.958398] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.364 request: 00:20:00.364 { 00:20:00.364 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:00.364 "namespace": { 00:20:00.364 "bdev_name": "Malloc0", 00:20:00.364 "no_auto_visible": false 00:20:00.364 }, 00:20:00.364 "method": "nvmf_subsystem_add_ns", 00:20:00.364 "req_id": 1 00:20:00.364 } 00:20:00.364 Got JSON-RPC error response 00:20:00.364 response: 00:20:00.364 { 00:20:00.364 "code": -32602, 00:20:00.365 "message": "Invalid parameters" 00:20:00.365 } 00:20:00.365 23:22:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:00.365 23:22:48 -- target/nmic.sh@29 -- # nmic_status=1 00:20:00.365 23:22:48 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:00.365 23:22:48 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:00.365 Adding namespace failed - expected result. 00:20:00.365 23:22:48 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:00.365 test case2: host connect to nvmf target in multiple paths 00:20:00.365 23:22:48 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:00.365 23:22:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.365 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:20:00.365 [2024-04-26 23:22:48.970519] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:00.365 23:22:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.365 23:22:48 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:01.310 23:22:50 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:03.225 23:22:52 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:03.225 23:22:52 -- common/autotest_common.sh@1184 -- # local i=0 00:20:03.225 23:22:52 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:03.225 23:22:52 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:03.225 23:22:52 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:05.141 23:22:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:05.141 23:22:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:05.141 23:22:54 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:05.141 23:22:54 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:05.141 23:22:54 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:05.141 23:22:54 -- common/autotest_common.sh@1194 -- # return 0 00:20:05.141 23:22:54 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:05.141 [global] 00:20:05.141 thread=1 00:20:05.141 invalidate=1 00:20:05.141 rw=write 00:20:05.141 time_based=1 00:20:05.141 runtime=1 00:20:05.141 ioengine=libaio 00:20:05.141 direct=1 00:20:05.141 bs=4096 00:20:05.141 iodepth=1 00:20:05.141 norandommap=0 00:20:05.141 numjobs=1 00:20:05.141 00:20:05.141 verify_dump=1 00:20:05.141 verify_backlog=512 00:20:05.141 verify_state_save=0 00:20:05.141 do_verify=1 00:20:05.141 verify=crc32c-intel 00:20:05.141 [job0] 00:20:05.141 filename=/dev/nvme0n1 00:20:05.141 Could not set queue depth (nvme0n1) 00:20:05.402 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:05.402 fio-3.35 00:20:05.402 Starting 1 thread 00:20:06.346 00:20:06.346 job0: (groupid=0, jobs=1): err= 0: pid=3957139: Fri Apr 26 23:22:55 2024 00:20:06.346 read: IOPS=541, BW=2166KiB/s (2218kB/s)(2168KiB/1001msec) 00:20:06.346 slat (nsec): min=6621, max=67152, avg=26222.46, stdev=4662.15 00:20:06.346 clat (usec): min=402, max=1103, avg=879.27, stdev=80.27 00:20:06.346 lat (usec): min=428, max=1129, avg=905.50, stdev=80.07 00:20:06.346 clat percentiles (usec): 00:20:06.346 | 1.00th=[ 652], 5.00th=[ 758], 10.00th=[ 775], 20.00th=[ 816], 00:20:06.346 | 30.00th=[ 840], 40.00th=[ 865], 50.00th=[ 889], 60.00th=[ 906], 00:20:06.346 | 70.00th=[ 922], 80.00th=[ 947], 90.00th=[ 971], 95.00th=[ 988], 00:20:06.346 | 99.00th=[ 1045], 99.50th=[ 1090], 99.90th=[ 1106], 99.95th=[ 1106], 00:20:06.346 | 99.99th=[ 1106] 00:20:06.346 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:20:06.346 slat (nsec): min=8622, max=67885, avg=29196.62, stdev=9423.60 00:20:06.346 clat (usec): min=145, max=734, avg=457.17, stdev=100.87 00:20:06.346 lat (usec): min=156, max=783, avg=486.37, stdev=104.81 00:20:06.346 clat percentiles (usec): 00:20:06.346 | 1.00th=[ 253], 5.00th=[ 285], 10.00th=[ 338], 20.00th=[ 363], 00:20:06.346 | 30.00th=[ 400], 40.00th=[ 445], 50.00th=[ 457], 60.00th=[ 474], 00:20:06.346 | 70.00th=[ 502], 80.00th=[ 545], 90.00th=[ 594], 95.00th=[ 627], 00:20:06.346 | 99.00th=[ 676], 99.50th=[ 701], 99.90th=[ 734], 99.95th=[ 734], 00:20:06.346 | 99.99th=[ 734] 00:20:06.346 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:06.346 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:06.346 lat (usec) : 250=0.51%, 500=44.89%, 750=21.52%, 1000=31.74% 00:20:06.346 lat (msec) : 2=1.34% 00:20:06.346 cpu : usr=2.70%, sys=6.30%, ctx=1566, majf=0, minf=1 00:20:06.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:06.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.346 issued rwts: total=542,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:06.346 00:20:06.346 Run status group 0 (all jobs): 00:20:06.346 READ: bw=2166KiB/s (2218kB/s), 2166KiB/s-2166KiB/s (2218kB/s-2218kB/s), io=2168KiB (2220kB), run=1001-1001msec 00:20:06.346 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s 
(4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:20:06.346 00:20:06.346 Disk stats (read/write): 00:20:06.346 nvme0n1: ios=562/884, merge=0/0, ticks=464/348, in_queue=812, util=94.19% 00:20:06.346 23:22:55 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:06.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:06.608 23:22:55 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:06.608 23:22:55 -- common/autotest_common.sh@1205 -- # local i=0 00:20:06.608 23:22:55 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:06.608 23:22:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.608 23:22:55 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:06.608 23:22:55 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.608 23:22:55 -- common/autotest_common.sh@1217 -- # return 0 00:20:06.608 23:22:55 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:06.608 23:22:55 -- target/nmic.sh@53 -- # nvmftestfini 00:20:06.608 23:22:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:06.608 23:22:55 -- nvmf/common.sh@117 -- # sync 00:20:06.608 23:22:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.608 23:22:55 -- nvmf/common.sh@120 -- # set +e 00:20:06.608 23:22:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.608 23:22:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.608 rmmod nvme_tcp 00:20:06.608 rmmod nvme_fabrics 00:20:06.608 rmmod nvme_keyring 00:20:06.608 23:22:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.608 23:22:55 -- nvmf/common.sh@124 -- # set -e 00:20:06.608 23:22:55 -- nvmf/common.sh@125 -- # return 0 00:20:06.608 23:22:55 -- nvmf/common.sh@478 -- # '[' -n 3955916 ']' 00:20:06.608 23:22:55 -- nvmf/common.sh@479 -- # killprocess 3955916 00:20:06.608 23:22:55 -- common/autotest_common.sh@936 -- # '[' -z 3955916 ']' 00:20:06.608 23:22:55 -- common/autotest_common.sh@940 -- # kill -0 3955916 00:20:06.608 23:22:55 -- common/autotest_common.sh@941 -- # uname 00:20:06.608 23:22:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.608 23:22:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3955916 00:20:06.608 23:22:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:06.608 23:22:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:06.608 23:22:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3955916' 00:20:06.608 killing process with pid 3955916 00:20:06.608 23:22:55 -- common/autotest_common.sh@955 -- # kill 3955916 00:20:06.608 23:22:55 -- common/autotest_common.sh@960 -- # wait 3955916 00:20:06.870 23:22:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:06.870 23:22:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:06.870 23:22:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:06.870 23:22:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.870 23:22:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.870 23:22:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.870 23:22:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.870 23:22:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.421 23:22:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.421 00:20:09.421 real 0m16.956s 00:20:09.421 user 0m45.775s 00:20:09.421 sys 0m6.190s 00:20:09.421 23:22:58 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:09.421 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:20:09.421 ************************************ 00:20:09.421 END TEST nvmf_nmic 00:20:09.421 ************************************ 00:20:09.421 23:22:58 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:09.421 23:22:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:09.421 23:22:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:09.421 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:20:09.421 ************************************ 00:20:09.421 START TEST nvmf_fio_target 00:20:09.421 ************************************ 00:20:09.421 23:22:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:09.421 * Looking for test storage... 00:20:09.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:09.421 23:22:58 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.421 23:22:58 -- nvmf/common.sh@7 -- # uname -s 00:20:09.421 23:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.421 23:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.421 23:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.421 23:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.421 23:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.421 23:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.421 23:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.421 23:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.421 23:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.421 23:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.421 23:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.421 23:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.421 23:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.421 23:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.421 23:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.421 23:22:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.421 23:22:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.421 23:22:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.422 23:22:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.422 23:22:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.422 23:22:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.422 23:22:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.422 23:22:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.422 23:22:58 -- paths/export.sh@5 -- # export PATH 00:20:09.422 23:22:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.422 23:22:58 -- nvmf/common.sh@47 -- # : 0 00:20:09.422 23:22:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.422 23:22:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.422 23:22:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.422 23:22:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.422 23:22:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.422 23:22:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.422 23:22:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.422 23:22:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.422 23:22:58 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:09.422 23:22:58 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:09.422 23:22:58 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:09.422 23:22:58 -- target/fio.sh@16 -- # nvmftestinit 00:20:09.422 23:22:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:09.422 23:22:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.422 23:22:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:09.422 23:22:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:09.422 23:22:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:09.422 23:22:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.422 23:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.422 23:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.422 23:22:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:09.422 23:22:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:09.422 23:22:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.422 23:22:58 -- 
common/autotest_common.sh@10 -- # set +x 00:20:16.017 23:23:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:16.017 23:23:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:16.017 23:23:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:16.017 23:23:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:16.017 23:23:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:16.017 23:23:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:16.017 23:23:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:16.017 23:23:05 -- nvmf/common.sh@295 -- # net_devs=() 00:20:16.017 23:23:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:16.017 23:23:05 -- nvmf/common.sh@296 -- # e810=() 00:20:16.017 23:23:05 -- nvmf/common.sh@296 -- # local -ga e810 00:20:16.017 23:23:05 -- nvmf/common.sh@297 -- # x722=() 00:20:16.017 23:23:05 -- nvmf/common.sh@297 -- # local -ga x722 00:20:16.017 23:23:05 -- nvmf/common.sh@298 -- # mlx=() 00:20:16.017 23:23:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:16.017 23:23:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.017 23:23:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:16.017 23:23:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:16.017 23:23:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:16.017 23:23:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:16.018 23:23:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:16.018 23:23:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.018 23:23:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:16.018 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:16.018 23:23:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.018 23:23:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:16.018 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:16.018 23:23:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
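[editor's sketch] The discovery pass above is a table lookup: the harness builds PCI-ID lists per NIC family (e810, x722, mlx) and walks every matching function. A rough stand-alone equivalent of the e810 branch, assuming pciutils' lspci is available — the log does not show how pci_bus_cache is populated, so this is an illustration, not the harness code:
  # Sketch: find Intel E810 ports (vendor 0x8086, device 0x159b, 'ice' driver)
  # and list the net devices registered under each PCI function, mirroring the
  # "Found 0000:31:00.x" / "Found net devices under ..." lines in the trace.
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci (0x8086 - 0x159b)"
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: ${dev##*/}"
      done
  done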
00:20:16.018 23:23:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:16.018 23:23:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.018 23:23:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.018 23:23:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:16.018 23:23:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.018 23:23:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:16.018 Found net devices under 0000:31:00.0: cvl_0_0 00:20:16.018 23:23:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.018 23:23:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.018 23:23:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.018 23:23:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:16.018 23:23:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.018 23:23:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:16.018 Found net devices under 0000:31:00.1: cvl_0_1 00:20:16.018 23:23:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.018 23:23:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:16.018 23:23:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:16.018 23:23:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:16.018 23:23:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:16.018 23:23:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.018 23:23:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.018 23:23:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.018 23:23:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:16.018 23:23:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.018 23:23:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.018 23:23:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:16.018 23:23:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.018 23:23:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.018 23:23:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:16.018 23:23:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:16.018 23:23:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.018 23:23:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.279 23:23:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.279 23:23:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.279 23:23:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:16.279 23:23:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.279 23:23:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.279 23:23:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.279 23:23:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:16.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:16.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:20:16.279 00:20:16.279 --- 10.0.0.2 ping statistics --- 00:20:16.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.280 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:20:16.280 23:23:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:20:16.280 00:20:16.280 --- 10.0.0.1 ping statistics --- 00:20:16.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.280 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:20:16.280 23:23:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.280 23:23:05 -- nvmf/common.sh@411 -- # return 0 00:20:16.280 23:23:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:16.280 23:23:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.280 23:23:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:16.280 23:23:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:16.280 23:23:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.280 23:23:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:16.280 23:23:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:16.541 23:23:05 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:16.541 23:23:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:16.541 23:23:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:16.541 23:23:05 -- common/autotest_common.sh@10 -- # set +x 00:20:16.541 23:23:05 -- nvmf/common.sh@470 -- # nvmfpid=3961666 00:20:16.541 23:23:05 -- nvmf/common.sh@471 -- # waitforlisten 3961666 00:20:16.541 23:23:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:16.541 23:23:05 -- common/autotest_common.sh@817 -- # '[' -z 3961666 ']' 00:20:16.541 23:23:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.541 23:23:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:16.541 23:23:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.541 23:23:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:16.541 23:23:05 -- common/autotest_common.sh@10 -- # set +x 00:20:16.541 [2024-04-26 23:23:05.630259] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:16.541 [2024-04-26 23:23:05.630326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.541 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.541 [2024-04-26 23:23:05.703864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.541 [2024-04-26 23:23:05.742315] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.541 [2024-04-26 23:23:05.742364] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
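[editor's sketch] The nvmf_tcp_init sequence traced above is worth reading as a unit: one E810 port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and a ping in each direction proves the path before the kernel initiator driver is loaded. Collected from the trace, with interface names and addresses exactly as logged:
  # Target side lives in its own namespace; initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port -> namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
  modprobe nvme-tcp                                   # kernel NVMe/TCP initiator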
00:20:16.541 [2024-04-26 23:23:05.742372] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.541 [2024-04-26 23:23:05.742379] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.541 [2024-04-26 23:23:05.742385] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.541 [2024-04-26 23:23:05.742504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.541 [2024-04-26 23:23:05.742625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.541 [2024-04-26 23:23:05.742784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.541 [2024-04-26 23:23:05.742785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.486 23:23:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:17.486 23:23:06 -- common/autotest_common.sh@850 -- # return 0 00:20:17.486 23:23:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:17.486 23:23:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:17.486 23:23:06 -- common/autotest_common.sh@10 -- # set +x 00:20:17.486 23:23:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.486 23:23:06 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:17.486 [2024-04-26 23:23:06.592006] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.486 23:23:06 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:17.747 23:23:06 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:17.747 23:23:06 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:17.747 23:23:06 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:17.747 23:23:06 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:18.008 23:23:07 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:18.008 23:23:07 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:18.268 23:23:07 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:18.268 23:23:07 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:18.268 23:23:07 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:18.528 23:23:07 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:18.528 23:23:07 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:18.789 23:23:07 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:18.789 23:23:07 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:18.789 23:23:08 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:18.789 23:23:08 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:19.049 23:23:08 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:19.311 23:23:08 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:19.311 23:23:08 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:19.311 23:23:08 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:19.311 23:23:08 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:19.572 23:23:08 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.834 [2024-04-26 23:23:08.853531] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.834 23:23:08 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:19.834 23:23:09 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:20.108 23:23:09 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:21.538 23:23:10 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:21.538 23:23:10 -- common/autotest_common.sh@1184 -- # local i=0 00:20:21.538 23:23:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:21.538 23:23:10 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:20:21.538 23:23:10 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:20:21.538 23:23:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:23.447 23:23:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:23.447 23:23:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:23.447 23:23:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:23.707 23:23:12 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:20:23.707 23:23:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:23.707 23:23:12 -- common/autotest_common.sh@1194 -- # return 0 00:20:23.707 23:23:12 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:23.707 [global] 00:20:23.707 thread=1 00:20:23.707 invalidate=1 00:20:23.707 rw=write 00:20:23.707 time_based=1 00:20:23.707 runtime=1 00:20:23.707 ioengine=libaio 00:20:23.707 direct=1 00:20:23.707 bs=4096 00:20:23.707 iodepth=1 00:20:23.707 norandommap=0 00:20:23.707 numjobs=1 00:20:23.707 00:20:23.707 verify_dump=1 00:20:23.707 verify_backlog=512 00:20:23.707 verify_state_save=0 00:20:23.707 do_verify=1 00:20:23.707 verify=crc32c-intel 00:20:23.707 [job0] 00:20:23.707 filename=/dev/nvme0n1 00:20:23.707 [job1] 00:20:23.707 filename=/dev/nvme0n2 00:20:23.707 [job2] 00:20:23.707 filename=/dev/nvme0n3 00:20:23.707 [job3] 00:20:23.707 filename=/dev/nvme0n4 00:20:23.707 Could not set queue depth (nvme0n1) 00:20:23.707 Could not set queue depth (nvme0n2) 00:20:23.707 Could not set queue depth (nvme0n3) 00:20:23.707 Could not set queue depth (nvme0n4) 00:20:23.967 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:20:23.967 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:23.967 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:23.967 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:23.967 fio-3.35 00:20:23.967 Starting 4 threads 00:20:25.354 00:20:25.354 job0: (groupid=0, jobs=1): err= 0: pid=3963448: Fri Apr 26 23:23:14 2024 00:20:25.354 read: IOPS=189, BW=758KiB/s (776kB/s)(772KiB/1019msec) 00:20:25.354 slat (nsec): min=6637, max=46802, avg=24203.96, stdev=7837.82 00:20:25.354 clat (usec): min=347, max=42000, avg=3685.40, stdev=10652.74 00:20:25.354 lat (usec): min=374, max=42027, avg=3709.61, stdev=10653.55 00:20:25.354 clat percentiles (usec): 00:20:25.354 | 1.00th=[ 416], 5.00th=[ 553], 10.00th=[ 578], 20.00th=[ 644], 00:20:25.354 | 30.00th=[ 668], 40.00th=[ 701], 50.00th=[ 734], 60.00th=[ 750], 00:20:25.354 | 70.00th=[ 783], 80.00th=[ 832], 90.00th=[ 898], 95.00th=[41681], 00:20:25.354 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:25.354 | 99.99th=[42206] 00:20:25.354 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:20:25.354 slat (nsec): min=9407, max=53462, avg=32895.69, stdev=8331.60 00:20:25.354 clat (usec): min=215, max=875, avg=543.97, stdev=118.63 00:20:25.354 lat (usec): min=250, max=911, avg=576.86, stdev=120.68 00:20:25.354 clat percentiles (usec): 00:20:25.354 | 1.00th=[ 265], 5.00th=[ 334], 10.00th=[ 383], 20.00th=[ 441], 00:20:25.354 | 30.00th=[ 486], 40.00th=[ 523], 50.00th=[ 553], 60.00th=[ 586], 00:20:25.354 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 693], 95.00th=[ 725], 00:20:25.354 | 99.00th=[ 791], 99.50th=[ 807], 99.90th=[ 873], 99.95th=[ 873], 00:20:25.354 | 99.99th=[ 873] 00:20:25.354 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:20:25.354 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:25.354 lat (usec) : 250=0.28%, 500=24.96%, 750=61.28%, 1000=11.49% 00:20:25.354 lat (msec) : 50=1.99% 00:20:25.354 cpu : usr=1.38%, sys=2.75%, ctx=709, majf=0, minf=1 00:20:25.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.355 issued rwts: total=193,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:25.355 job1: (groupid=0, jobs=1): err= 0: pid=3963449: Fri Apr 26 23:23:14 2024 00:20:25.355 read: IOPS=431, BW=1726KiB/s (1767kB/s)(1748KiB/1013msec) 00:20:25.355 slat (nsec): min=6393, max=60839, avg=23569.00, stdev=6311.73 00:20:25.355 clat (usec): min=184, max=42264, avg=1551.46, stdev=5109.43 00:20:25.355 lat (usec): min=190, max=42289, avg=1575.03, stdev=5109.50 00:20:25.355 clat percentiles (usec): 00:20:25.355 | 1.00th=[ 229], 5.00th=[ 289], 10.00th=[ 375], 20.00th=[ 469], 00:20:25.355 | 30.00th=[ 594], 40.00th=[ 963], 50.00th=[ 1057], 60.00th=[ 1123], 00:20:25.355 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[ 1287], 00:20:25.355 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:20:25.355 | 99.99th=[42206] 00:20:25.355 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:20:25.355 slat (nsec): min=9793, max=81525, 
avg=30005.53, stdev=8839.52 00:20:25.355 clat (usec): min=315, max=997, avg=583.84, stdev=102.89 00:20:25.355 lat (usec): min=326, max=1029, avg=613.84, stdev=106.10 00:20:25.355 clat percentiles (usec): 00:20:25.355 | 1.00th=[ 379], 5.00th=[ 420], 10.00th=[ 469], 20.00th=[ 502], 00:20:25.355 | 30.00th=[ 519], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:20:25.355 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 709], 95.00th=[ 758], 00:20:25.355 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 996], 99.95th=[ 996], 00:20:25.355 | 99.99th=[ 996] 00:20:25.355 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:20:25.355 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:25.355 lat (usec) : 250=1.05%, 500=20.44%, 750=43.73%, 1000=8.43% 00:20:25.355 lat (msec) : 2=25.61%, 50=0.74% 00:20:25.355 cpu : usr=1.98%, sys=2.08%, ctx=950, majf=0, minf=1 00:20:25.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.355 issued rwts: total=437,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:25.355 job2: (groupid=0, jobs=1): err= 0: pid=3963450: Fri Apr 26 23:23:14 2024 00:20:25.355 read: IOPS=202, BW=811KiB/s (831kB/s)(812KiB/1001msec) 00:20:25.355 slat (nsec): min=6709, max=47669, avg=25819.83, stdev=7020.59 00:20:25.355 clat (usec): min=319, max=41928, avg=3556.35, stdev=10253.38 00:20:25.355 lat (usec): min=347, max=41954, avg=3582.17, stdev=10253.56 00:20:25.355 clat percentiles (usec): 00:20:25.355 | 1.00th=[ 506], 5.00th=[ 594], 10.00th=[ 644], 20.00th=[ 668], 00:20:25.355 | 30.00th=[ 701], 40.00th=[ 742], 50.00th=[ 791], 60.00th=[ 824], 00:20:25.355 | 70.00th=[ 848], 80.00th=[ 889], 90.00th=[ 963], 95.00th=[41157], 00:20:25.355 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:20:25.355 | 99.99th=[41681] 00:20:25.355 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:25.355 slat (nsec): min=9845, max=66836, avg=30658.11, stdev=10053.13 00:20:25.355 clat (usec): min=183, max=782, avg=488.81, stdev=82.40 00:20:25.355 lat (usec): min=204, max=817, avg=519.47, stdev=86.21 00:20:25.355 clat percentiles (usec): 00:20:25.355 | 1.00th=[ 297], 5.00th=[ 338], 10.00th=[ 383], 20.00th=[ 416], 00:20:25.355 | 30.00th=[ 445], 40.00th=[ 478], 50.00th=[ 502], 60.00th=[ 519], 00:20:25.355 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 578], 95.00th=[ 611], 00:20:25.355 | 99.00th=[ 652], 99.50th=[ 676], 99.90th=[ 783], 99.95th=[ 783], 00:20:25.355 | 99.99th=[ 783] 00:20:25.355 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:20:25.355 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:25.355 lat (usec) : 250=0.28%, 500=35.10%, 750=47.69%, 1000=14.41% 00:20:25.355 lat (msec) : 2=0.56%, 50=1.96% 00:20:25.355 cpu : usr=1.50%, sys=1.90%, ctx=716, majf=0, minf=1 00:20:25.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.355 issued rwts: total=203,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:25.355 job3: (groupid=0, jobs=1): 
err= 0: pid=3963451: Fri Apr 26 23:23:14 2024 00:20:25.355 read: IOPS=17, BW=71.6KiB/s (73.3kB/s)(72.0KiB/1006msec) 00:20:25.355 slat (nsec): min=26143, max=27037, avg=26559.89, stdev=211.58 00:20:25.355 clat (usec): min=40864, max=41928, avg=41025.19, stdev=234.54 00:20:25.355 lat (usec): min=40890, max=41954, avg=41051.75, stdev=234.53 00:20:25.355 clat percentiles (usec): 00:20:25.355 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:20:25.355 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:25.355 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:20:25.355 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:20:25.355 | 99.99th=[41681] 00:20:25.355 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:20:25.355 slat (nsec): min=9702, max=52081, avg=29105.97, stdev=10410.93 00:20:25.355 clat (usec): min=156, max=641, avg=481.21, stdev=76.48 00:20:25.355 lat (usec): min=169, max=671, avg=510.31, stdev=79.97 00:20:25.355 clat percentiles (usec): 00:20:25.355 | 1.00th=[ 289], 5.00th=[ 318], 10.00th=[ 383], 20.00th=[ 416], 00:20:25.355 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 494], 60.00th=[ 510], 00:20:25.355 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 570], 95.00th=[ 586], 00:20:25.355 | 99.00th=[ 627], 99.50th=[ 635], 99.90th=[ 644], 99.95th=[ 644], 00:20:25.355 | 99.99th=[ 644] 00:20:25.355 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:20:25.355 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:25.355 lat (usec) : 250=0.19%, 500=50.75%, 750=45.66% 00:20:25.355 lat (msec) : 50=3.40% 00:20:25.355 cpu : usr=0.70%, sys=1.49%, ctx=531, majf=0, minf=1 00:20:25.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.355 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:25.355 00:20:25.355 Run status group 0 (all jobs): 00:20:25.355 READ: bw=3341KiB/s (3421kB/s), 71.6KiB/s-1726KiB/s (73.3kB/s-1767kB/s), io=3404KiB (3486kB), run=1001-1019msec 00:20:25.355 WRITE: bw=8039KiB/s (8232kB/s), 2010KiB/s-2046KiB/s (2058kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1019msec 00:20:25.355 00:20:25.355 Disk stats (read/write): 00:20:25.355 nvme0n1: ios=228/512, merge=0/0, ticks=943/200, in_queue=1143, util=96.59% 00:20:25.355 nvme0n2: ios=459/512, merge=0/0, ticks=1345/286, in_queue=1631, util=97.55% 00:20:25.355 nvme0n3: ios=36/512, merge=0/0, ticks=1500/239, in_queue=1739, util=97.36% 00:20:25.355 nvme0n4: ios=40/512, merge=0/0, ticks=1446/240, in_queue=1686, util=97.33% 00:20:25.355 23:23:14 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:25.355 [global] 00:20:25.355 thread=1 00:20:25.355 invalidate=1 00:20:25.355 rw=randwrite 00:20:25.355 time_based=1 00:20:25.355 runtime=1 00:20:25.355 ioengine=libaio 00:20:25.355 direct=1 00:20:25.355 bs=4096 00:20:25.355 iodepth=1 00:20:25.355 norandommap=0 00:20:25.355 numjobs=1 00:20:25.355 00:20:25.355 verify_dump=1 00:20:25.355 verify_backlog=512 00:20:25.355 verify_state_save=0 00:20:25.355 do_verify=1 00:20:25.355 verify=crc32c-intel 00:20:25.355 [job0] 00:20:25.355 filename=/dev/nvme0n1 00:20:25.355 [job1] 
00:20:25.355 filename=/dev/nvme0n2 00:20:25.355 [job2] 00:20:25.355 filename=/dev/nvme0n3 00:20:25.355 [job3] 00:20:25.355 filename=/dev/nvme0n4 00:20:25.355 Could not set queue depth (nvme0n1) 00:20:25.355 Could not set queue depth (nvme0n2) 00:20:25.355 Could not set queue depth (nvme0n3) 00:20:25.355 Could not set queue depth (nvme0n4) 00:20:25.616 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:25.616 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:25.616 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:25.616 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:25.616 fio-3.35 00:20:25.616 Starting 4 threads 00:20:27.001 00:20:27.001 job0: (groupid=0, jobs=1): err= 0: pid=3963974: Fri Apr 26 23:23:16 2024 00:20:27.001 read: IOPS=18, BW=73.0KiB/s (74.8kB/s)(76.0KiB/1041msec) 00:20:27.001 slat (nsec): min=9147, max=31453, avg=25457.68, stdev=4163.19 00:20:27.001 clat (usec): min=964, max=42075, avg=39675.33, stdev=9380.22 00:20:27.001 lat (usec): min=991, max=42101, avg=39700.79, stdev=9379.81 00:20:27.001 clat percentiles (usec): 00:20:27.001 | 1.00th=[ 963], 5.00th=[ 963], 10.00th=[41157], 20.00th=[41681], 00:20:27.001 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:20:27.001 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:27.001 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:27.001 | 99.99th=[42206] 00:20:27.001 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:20:27.001 slat (nsec): min=8615, max=50593, avg=28604.00, stdev=9816.75 00:20:27.001 clat (usec): min=269, max=856, avg=523.01, stdev=102.21 00:20:27.001 lat (usec): min=285, max=871, avg=551.62, stdev=105.04 00:20:27.001 clat percentiles (usec): 00:20:27.001 | 1.00th=[ 302], 5.00th=[ 371], 10.00th=[ 400], 20.00th=[ 453], 00:20:27.001 | 30.00th=[ 474], 40.00th=[ 486], 50.00th=[ 502], 60.00th=[ 523], 00:20:27.001 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 668], 95.00th=[ 709], 00:20:27.001 | 99.00th=[ 775], 99.50th=[ 799], 99.90th=[ 857], 99.95th=[ 857], 00:20:27.001 | 99.99th=[ 857] 00:20:27.001 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:20:27.001 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:27.001 lat (usec) : 500=47.65%, 750=46.14%, 1000=2.82% 00:20:27.001 lat (msec) : 50=3.39% 00:20:27.001 cpu : usr=1.44%, sys=1.35%, ctx=533, majf=0, minf=1 00:20:27.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.001 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.001 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:27.001 job1: (groupid=0, jobs=1): err= 0: pid=3963975: Fri Apr 26 23:23:16 2024 00:20:27.001 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:20:27.001 slat (nsec): min=6827, max=42494, avg=24085.87, stdev=3156.55 00:20:27.001 clat (usec): min=676, max=1301, avg=1075.47, stdev=88.03 00:20:27.001 lat (usec): min=700, max=1325, avg=1099.55, stdev=88.16 00:20:27.001 clat percentiles (usec): 00:20:27.001 | 1.00th=[ 816], 5.00th=[ 914], 10.00th=[ 
955], 20.00th=[ 1012], 00:20:27.001 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:20:27.001 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1172], 95.00th=[ 1205], 00:20:27.001 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1303], 00:20:27.001 | 99.99th=[ 1303] 00:20:27.001 write: IOPS=660, BW=2641KiB/s (2705kB/s)(2644KiB/1001msec); 0 zone resets 00:20:27.001 slat (nsec): min=9033, max=64534, avg=27400.45, stdev=7719.81 00:20:27.001 clat (usec): min=147, max=978, avg=620.59, stdev=131.85 00:20:27.001 lat (usec): min=156, max=1008, avg=647.99, stdev=134.25 00:20:27.001 clat percentiles (usec): 00:20:27.001 | 1.00th=[ 297], 5.00th=[ 396], 10.00th=[ 461], 20.00th=[ 502], 00:20:27.001 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:20:27.001 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 840], 00:20:27.001 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 979], 00:20:27.001 | 99.99th=[ 979] 00:20:27.001 bw ( KiB/s): min= 1184, max= 4096, per=28.46%, avg=2640.00, stdev=2059.09, samples=2 00:20:27.001 iops : min= 296, max= 1024, avg=660.00, stdev=514.77, samples=2 00:20:27.001 lat (usec) : 250=0.17%, 500=10.83%, 750=35.81%, 1000=17.56% 00:20:27.001 lat (msec) : 2=35.64% 00:20:27.001 cpu : usr=1.50%, sys=3.40%, ctx=1173, majf=0, minf=1 00:20:27.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.001 issued rwts: total=512,661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.001 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:27.001 job2: (groupid=0, jobs=1): err= 0: pid=3963976: Fri Apr 26 23:23:16 2024 00:20:27.001 read: IOPS=152, BW=611KiB/s (626kB/s)(612KiB/1001msec) 00:20:27.001 slat (nsec): min=24446, max=42999, avg=25349.04, stdev=2976.41 00:20:27.001 clat (usec): min=807, max=42058, avg=4329.22, stdev=10855.17 00:20:27.001 lat (usec): min=832, max=42082, avg=4354.57, stdev=10855.02 00:20:27.001 clat percentiles (usec): 00:20:27.001 | 1.00th=[ 840], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1045], 00:20:27.001 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1172], 00:20:27.001 | 70.00th=[ 1221], 80.00th=[ 1270], 90.00th=[ 1369], 95.00th=[41157], 00:20:27.001 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:27.001 | 99.99th=[42206] 00:20:27.001 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:27.001 slat (nsec): min=8966, max=51938, avg=29700.79, stdev=6961.72 00:20:27.001 clat (usec): min=187, max=948, avg=612.43, stdev=138.72 00:20:27.001 lat (usec): min=197, max=978, avg=642.13, stdev=140.12 00:20:27.001 clat percentiles (usec): 00:20:27.001 | 1.00th=[ 285], 5.00th=[ 388], 10.00th=[ 420], 20.00th=[ 498], 00:20:27.001 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:20:27.001 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 832], 00:20:27.001 | 99.00th=[ 889], 99.50th=[ 938], 99.90th=[ 947], 99.95th=[ 947], 00:20:27.001 | 99.99th=[ 947] 00:20:27.001 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:20:27.001 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:27.001 lat (usec) : 250=0.45%, 500=15.19%, 750=48.72%, 1000=15.04% 00:20:27.001 lat (msec) : 2=18.65%, 10=0.15%, 50=1.80% 00:20:27.001 cpu : usr=1.20%, sys=1.70%, ctx=665, majf=0, 
minf=1 00:20:27.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.001 issued rwts: total=153,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.001 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:27.001 job3: (groupid=0, jobs=1): err= 0: pid=3963977: Fri Apr 26 23:23:16 2024 00:20:27.001 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:20:27.001 slat (nsec): min=6947, max=60707, avg=24674.17, stdev=3961.79 00:20:27.001 clat (usec): min=680, max=3165, avg=1059.79, stdev=189.56 00:20:27.001 lat (usec): min=687, max=3190, avg=1084.47, stdev=189.63 00:20:27.001 clat percentiles (usec): 00:20:27.001 | 1.00th=[ 725], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 914], 00:20:27.001 | 30.00th=[ 955], 40.00th=[ 1004], 50.00th=[ 1045], 60.00th=[ 1090], 00:20:27.001 | 70.00th=[ 1139], 80.00th=[ 1205], 90.00th=[ 1303], 95.00th=[ 1352], 00:20:27.001 | 99.00th=[ 1434], 99.50th=[ 1450], 99.90th=[ 3163], 99.95th=[ 3163], 00:20:27.001 | 99.99th=[ 3163] 00:20:27.001 write: IOPS=728, BW=2913KiB/s (2983kB/s)(2916KiB/1001msec); 0 zone resets 00:20:27.001 slat (nsec): min=8988, max=62917, avg=26810.33, stdev=9072.39 00:20:27.001 clat (usec): min=155, max=1026, avg=570.55, stdev=132.88 00:20:27.001 lat (usec): min=164, max=1057, avg=597.37, stdev=136.73 00:20:27.001 clat percentiles (usec): 00:20:27.001 | 1.00th=[ 277], 5.00th=[ 326], 10.00th=[ 408], 20.00th=[ 474], 00:20:27.001 | 30.00th=[ 510], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 603], 00:20:27.001 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 791], 00:20:27.001 | 99.00th=[ 906], 99.50th=[ 963], 99.90th=[ 1029], 99.95th=[ 1029], 00:20:27.001 | 99.99th=[ 1029] 00:20:27.001 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:20:27.001 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:27.001 lat (usec) : 250=0.24%, 500=15.39%, 750=38.60%, 1000=20.87% 00:20:27.001 lat (msec) : 2=24.82%, 4=0.08% 00:20:27.001 cpu : usr=1.80%, sys=3.40%, ctx=1242, majf=0, minf=1 00:20:27.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.001 issued rwts: total=512,729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.001 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:27.001 00:20:27.001 Run status group 0 (all jobs): 00:20:27.001 READ: bw=4596KiB/s (4706kB/s), 73.0KiB/s-2046KiB/s (74.8kB/s-2095kB/s), io=4784KiB (4899kB), run=1001-1041msec 00:20:27.001 WRITE: bw=9276KiB/s (9498kB/s), 1967KiB/s-2913KiB/s (2015kB/s-2983kB/s), io=9656KiB (9888kB), run=1001-1041msec 00:20:27.001 00:20:27.001 Disk stats (read/write): 00:20:27.001 nvme0n1: ios=66/512, merge=0/0, ticks=1449/231, in_queue=1680, util=96.69% 00:20:27.001 nvme0n2: ios=485/512, merge=0/0, ticks=530/309, in_queue=839, util=88.99% 00:20:27.001 nvme0n3: ios=149/512, merge=0/0, ticks=801/281, in_queue=1082, util=92.19% 00:20:27.001 nvme0n4: ios=475/512, merge=0/0, ticks=488/283, in_queue=771, util=89.43% 00:20:27.001 23:23:16 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:27.001 [global] 00:20:27.001 thread=1 00:20:27.001 
invalidate=1 00:20:27.001 rw=write 00:20:27.001 time_based=1 00:20:27.001 runtime=1 00:20:27.001 ioengine=libaio 00:20:27.001 direct=1 00:20:27.001 bs=4096 00:20:27.001 iodepth=128 00:20:27.001 norandommap=0 00:20:27.001 numjobs=1 00:20:27.001 00:20:27.001 verify_dump=1 00:20:27.001 verify_backlog=512 00:20:27.001 verify_state_save=0 00:20:27.001 do_verify=1 00:20:27.001 verify=crc32c-intel 00:20:27.001 [job0] 00:20:27.001 filename=/dev/nvme0n1 00:20:27.001 [job1] 00:20:27.001 filename=/dev/nvme0n2 00:20:27.001 [job2] 00:20:27.001 filename=/dev/nvme0n3 00:20:27.001 [job3] 00:20:27.001 filename=/dev/nvme0n4 00:20:27.001 Could not set queue depth (nvme0n1) 00:20:27.001 Could not set queue depth (nvme0n2) 00:20:27.001 Could not set queue depth (nvme0n3) 00:20:27.001 Could not set queue depth (nvme0n4) 00:20:27.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:27.262 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:27.262 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:27.262 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:27.262 fio-3.35 00:20:27.262 Starting 4 threads 00:20:28.648 00:20:28.648 job0: (groupid=0, jobs=1): err= 0: pid=3964498: Fri Apr 26 23:23:17 2024 00:20:28.648 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:20:28.648 slat (nsec): min=918, max=8761.5k, avg=79268.09, stdev=588690.57 00:20:28.648 clat (usec): min=3839, max=18592, avg=10315.53, stdev=2348.63 00:20:28.648 lat (usec): min=3844, max=18916, avg=10394.80, stdev=2390.85 00:20:28.648 clat percentiles (usec): 00:20:28.648 | 1.00th=[ 5800], 5.00th=[ 7177], 10.00th=[ 8225], 20.00th=[ 8979], 00:20:28.648 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9634], 60.00th=[ 9765], 00:20:28.648 | 70.00th=[10421], 80.00th=[11994], 90.00th=[13829], 95.00th=[15533], 00:20:28.648 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:20:28.648 | 99.99th=[18482] 00:20:28.648 write: IOPS=6655, BW=26.0MiB/s (27.3MB/s)(26.1MiB/1004msec); 0 zone resets 00:20:28.648 slat (nsec): min=1590, max=8148.7k, avg=65415.07, stdev=435651.75 00:20:28.648 clat (usec): min=1092, max=18182, avg=8776.56, stdev=2361.92 00:20:28.648 lat (usec): min=1103, max=18207, avg=8841.98, stdev=2375.55 00:20:28.648 clat percentiles (usec): 00:20:28.648 | 1.00th=[ 3261], 5.00th=[ 4817], 10.00th=[ 5473], 20.00th=[ 6390], 00:20:28.648 | 30.00th=[ 7701], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[ 9765], 00:20:28.648 | 70.00th=[10028], 80.00th=[10159], 90.00th=[11994], 95.00th=[12911], 00:20:28.648 | 99.00th=[14091], 99.50th=[14353], 99.90th=[17957], 99.95th=[18220], 00:20:28.648 | 99.99th=[18220] 00:20:28.648 bw ( KiB/s): min=24576, max=28672, per=29.46%, avg=26624.00, stdev=2896.31, samples=2 00:20:28.648 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:20:28.648 lat (msec) : 2=0.01%, 4=1.32%, 10=66.76%, 20=31.90% 00:20:28.648 cpu : usr=4.69%, sys=7.08%, ctx=548, majf=0, minf=1 00:20:28.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:28.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:28.648 issued rwts: total=6656,6682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.648 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:20:28.648 job1: (groupid=0, jobs=1): err= 0: pid=3964500: Fri Apr 26 23:23:17 2024 00:20:28.648 read: IOPS=5164, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1008msec) 00:20:28.648 slat (nsec): min=913, max=8424.6k, avg=82512.74, stdev=546545.02 00:20:28.648 clat (usec): min=4029, max=37694, avg=10271.63, stdev=3591.08 00:20:28.648 lat (usec): min=4047, max=37702, avg=10354.14, stdev=3632.34 00:20:28.648 clat percentiles (usec): 00:20:28.648 | 1.00th=[ 6128], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 7898], 00:20:28.648 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10028], 00:20:28.648 | 70.00th=[11076], 80.00th=[11994], 90.00th=[13698], 95.00th=[16909], 00:20:28.648 | 99.00th=[24249], 99.50th=[28705], 99.90th=[36963], 99.95th=[36963], 00:20:28.648 | 99.99th=[37487] 00:20:28.648 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:20:28.648 slat (nsec): min=1592, max=7131.4k, avg=96204.13, stdev=498111.15 00:20:28.648 clat (usec): min=1075, max=38451, avg=13198.99, stdev=8085.89 00:20:28.648 lat (usec): min=1088, max=38457, avg=13295.19, stdev=8135.21 00:20:28.648 clat percentiles (usec): 00:20:28.648 | 1.00th=[ 3752], 5.00th=[ 4490], 10.00th=[ 4817], 20.00th=[ 6652], 00:20:28.648 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 9503], 60.00th=[13829], 00:20:28.648 | 70.00th=[17695], 80.00th=[21890], 90.00th=[24249], 95.00th=[28705], 00:20:28.648 | 99.00th=[34866], 99.50th=[35914], 99.90th=[38536], 99.95th=[38536], 00:20:28.648 | 99.99th=[38536] 00:20:28.648 bw ( KiB/s): min=20480, max=24240, per=24.74%, avg=22360.00, stdev=2658.72, samples=2 00:20:28.648 iops : min= 5120, max= 6060, avg=5590.00, stdev=664.68, samples=2 00:20:28.648 lat (msec) : 2=0.03%, 4=0.77%, 10=54.86%, 20=30.61%, 50=13.74% 00:20:28.648 cpu : usr=5.56%, sys=4.47%, ctx=466, majf=0, minf=1 00:20:28.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:28.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:28.648 issued rwts: total=5206,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:28.648 job2: (groupid=0, jobs=1): err= 0: pid=3964501: Fri Apr 26 23:23:17 2024 00:20:28.648 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:20:28.648 slat (nsec): min=928, max=4214.7k, avg=69088.23, stdev=431811.91 00:20:28.648 clat (usec): min=5432, max=13321, avg=8608.52, stdev=1017.99 00:20:28.648 lat (usec): min=5440, max=13640, avg=8677.61, stdev=1073.85 00:20:28.649 clat percentiles (usec): 00:20:28.649 | 1.00th=[ 6063], 5.00th=[ 6718], 10.00th=[ 7242], 20.00th=[ 8160], 00:20:28.649 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:20:28.649 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[10552], 00:20:28.649 | 99.00th=[11731], 99.50th=[12125], 99.90th=[12649], 99.95th=[12780], 00:20:28.649 | 99.99th=[13304] 00:20:28.649 write: IOPS=6851, BW=26.8MiB/s (28.1MB/s)(26.9MiB/1004msec); 0 zone resets 00:20:28.649 slat (nsec): min=1604, max=36806k, avg=74080.93, stdev=767614.81 00:20:28.649 clat (usec): min=3363, max=78842, avg=9816.67, stdev=8758.06 00:20:28.649 lat (usec): min=3919, max=81391, avg=9890.75, stdev=8807.00 00:20:28.649 clat percentiles (usec): 00:20:28.649 | 1.00th=[ 5145], 5.00th=[ 6652], 10.00th=[ 7570], 20.00th=[ 7898], 00:20:28.649 | 30.00th=[ 8094], 40.00th=[ 8160], 50.00th=[ 8225], 60.00th=[ 8291], 00:20:28.649 | 
70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[11207], 00:20:28.649 | 99.00th=[45351], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:20:28.649 | 99.99th=[79168] 00:20:28.649 bw ( KiB/s): min=24576, max=29440, per=29.88%, avg=27008.00, stdev=3439.37, samples=2 00:20:28.649 iops : min= 6144, max= 7360, avg=6752.00, stdev=859.84, samples=2 00:20:28.649 lat (msec) : 4=0.07%, 10=91.56%, 20=6.49%, 50=1.42%, 100=0.47% 00:20:28.649 cpu : usr=4.59%, sys=5.48%, ctx=944, majf=0, minf=1 00:20:28.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:28.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:28.649 issued rwts: total=6656,6879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:28.649 job3: (groupid=0, jobs=1): err= 0: pid=3964502: Fri Apr 26 23:23:17 2024 00:20:28.649 read: IOPS=3516, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1004msec) 00:20:28.649 slat (nsec): min=895, max=11850k, avg=122211.86, stdev=765277.50 00:20:28.649 clat (usec): min=2268, max=56895, avg=14587.29, stdev=8314.84 00:20:28.649 lat (usec): min=5199, max=56902, avg=14709.50, stdev=8402.70 00:20:28.649 clat percentiles (usec): 00:20:28.649 | 1.00th=[ 5276], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10028], 00:20:28.649 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:20:28.649 | 70.00th=[12387], 80.00th=[15664], 90.00th=[27919], 95.00th=[34866], 00:20:28.649 | 99.00th=[41681], 99.50th=[49546], 99.90th=[56886], 99.95th=[56886], 00:20:28.649 | 99.99th=[56886] 00:20:28.649 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:20:28.649 slat (nsec): min=1547, max=15542k, avg=153357.17, stdev=815435.06 00:20:28.649 clat (usec): min=1205, max=71998, avg=21165.65, stdev=16732.77 00:20:28.649 lat (usec): min=1216, max=72019, avg=21319.01, stdev=16851.71 00:20:28.649 clat percentiles (usec): 00:20:28.649 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8717], 00:20:28.649 | 30.00th=[ 9634], 40.00th=[10945], 50.00th=[14091], 60.00th=[16057], 00:20:28.649 | 70.00th=[21890], 80.00th=[32637], 90.00th=[53740], 95.00th=[58983], 00:20:28.649 | 99.00th=[63177], 99.50th=[65274], 99.90th=[71828], 99.95th=[71828], 00:20:28.649 | 99.99th=[71828] 00:20:28.649 bw ( KiB/s): min=11656, max=17016, per=15.86%, avg=14336.00, stdev=3790.09, samples=2 00:20:28.649 iops : min= 2914, max= 4254, avg=3584.00, stdev=947.52, samples=2 00:20:28.649 lat (msec) : 2=0.04%, 4=0.01%, 10=27.22%, 20=46.30%, 50=19.92% 00:20:28.649 lat (msec) : 100=6.51% 00:20:28.649 cpu : usr=2.79%, sys=3.49%, ctx=380, majf=0, minf=1 00:20:28.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:28.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:28.649 issued rwts: total=3531,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:28.649 00:20:28.649 Run status group 0 (all jobs): 00:20:28.649 READ: bw=85.4MiB/s (89.6MB/s), 13.7MiB/s-25.9MiB/s (14.4MB/s-27.2MB/s), io=86.1MiB (90.3MB), run=1004-1008msec 00:20:28.649 WRITE: bw=88.3MiB/s (92.6MB/s), 13.9MiB/s-26.8MiB/s (14.6MB/s-28.1MB/s), io=89.0MiB (93.3MB), run=1004-1008msec 00:20:28.649 00:20:28.649 Disk stats (read/write): 00:20:28.649 nvme0n1: 
ios=5547/5632, merge=0/0, ticks=54471/47018, in_queue=101489, util=92.48% 00:20:28.649 nvme0n2: ios=4647/4745, merge=0/0, ticks=44985/56829, in_queue=101814, util=88.48% 00:20:28.649 nvme0n3: ios=5405/5632, merge=0/0, ticks=22600/21319, in_queue=43919, util=100.00% 00:20:28.649 nvme0n4: ios=2603/3047, merge=0/0, ticks=18133/33219, in_queue=51352, util=96.91% 00:20:28.649 23:23:17 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:28.649 [global] 00:20:28.649 thread=1 00:20:28.649 invalidate=1 00:20:28.649 rw=randwrite 00:20:28.649 time_based=1 00:20:28.649 runtime=1 00:20:28.649 ioengine=libaio 00:20:28.649 direct=1 00:20:28.649 bs=4096 00:20:28.649 iodepth=128 00:20:28.649 norandommap=0 00:20:28.649 numjobs=1 00:20:28.649 00:20:28.649 verify_dump=1 00:20:28.649 verify_backlog=512 00:20:28.649 verify_state_save=0 00:20:28.649 do_verify=1 00:20:28.649 verify=crc32c-intel 00:20:28.649 [job0] 00:20:28.649 filename=/dev/nvme0n1 00:20:28.649 [job1] 00:20:28.649 filename=/dev/nvme0n2 00:20:28.649 [job2] 00:20:28.649 filename=/dev/nvme0n3 00:20:28.649 [job3] 00:20:28.649 filename=/dev/nvme0n4 00:20:28.649 Could not set queue depth (nvme0n1) 00:20:28.649 Could not set queue depth (nvme0n2) 00:20:28.649 Could not set queue depth (nvme0n3) 00:20:28.649 Could not set queue depth (nvme0n4) 00:20:28.908 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:28.908 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:28.908 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:28.908 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:28.908 fio-3.35 00:20:28.908 Starting 4 threads 00:20:30.289 00:20:30.289 job0: (groupid=0, jobs=1): err= 0: pid=3964977: Fri Apr 26 23:23:19 2024 00:20:30.289 read: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1015msec) 00:20:30.289 slat (nsec): min=909, max=16376k, avg=109443.89, stdev=813725.23 00:20:30.289 clat (usec): min=4868, max=41626, avg=12529.15, stdev=5902.20 00:20:30.289 lat (usec): min=4873, max=41634, avg=12638.60, stdev=5986.89 00:20:30.289 clat percentiles (usec): 00:20:30.289 | 1.00th=[ 5604], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:20:30.289 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10421], 00:20:30.289 | 70.00th=[11863], 80.00th=[16450], 90.00th=[19268], 95.00th=[25560], 00:20:30.289 | 99.00th=[36439], 99.50th=[38536], 99.90th=[41681], 99.95th=[41681], 00:20:30.289 | 99.99th=[41681] 00:20:30.289 write: IOPS=4490, BW=17.5MiB/s (18.4MB/s)(17.8MiB/1015msec); 0 zone resets 00:20:30.289 slat (nsec): min=1532, max=12211k, avg=113878.29, stdev=658797.24 00:20:30.289 clat (usec): min=2173, max=50854, avg=16940.74, stdev=11724.10 00:20:30.289 lat (usec): min=2216, max=50864, avg=17054.62, stdev=11785.52 00:20:30.289 clat percentiles (usec): 00:20:30.289 | 1.00th=[ 3425], 5.00th=[ 5407], 10.00th=[ 5735], 20.00th=[ 8160], 00:20:30.289 | 30.00th=[ 8848], 40.00th=[11600], 50.00th=[14353], 60.00th=[17433], 00:20:30.289 | 70.00th=[17695], 80.00th=[20579], 90.00th=[34866], 95.00th=[48497], 00:20:30.289 | 99.00th=[50594], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:20:30.289 | 99.99th=[50594] 00:20:30.289 bw ( KiB/s): min=16816, max=18632, per=24.78%, avg=17724.00, stdev=1284.11, samples=2 
00:20:30.289 iops : min= 4204, max= 4658, avg=4431.00, stdev=321.03, samples=2
00:20:30.289 lat (msec) : 4=0.65%, 10=42.60%, 20=41.58%, 50=13.67%, 100=1.50%
00:20:30.289 cpu : usr=3.16%, sys=4.44%, ctx=411, majf=0, minf=1
00:20:30.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:20:30.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:30.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:30.289 issued rwts: total=4096,4558,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:30.289 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:30.289 job1: (groupid=0, jobs=1): err= 0: pid=3964993: Fri Apr 26 23:23:19 2024
00:20:30.289 read: IOPS=4818, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1003msec)
00:20:30.289 slat (nsec): min=847, max=6013.8k, avg=114566.91, stdev=569189.10
00:20:30.289 clat (usec): min=1316, max=17990, avg=14701.51, stdev=1719.15
00:20:30.289 lat (usec): min=5018, max=17997, avg=14816.08, stdev=1638.72
00:20:30.289 clat percentiles (usec):
00:20:30.289 | 1.00th=[ 9110], 5.00th=[11600], 10.00th=[12518], 20.00th=[13698],
00:20:30.289 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15139], 60.00th=[15401],
00:20:30.289 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16319], 95.00th=[16581],
00:20:30.289 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957],
00:20:30.289 | 99.99th=[17957]
00:20:30.289 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets
00:20:30.289 slat (nsec): min=1442, max=3661.9k, avg=81613.41, stdev=420163.24
00:20:30.289 clat (usec): min=1652, max=21775, avg=10930.95, stdev=2280.97
00:20:30.289 lat (usec): min=1678, max=21783, avg=11012.56, stdev=2250.89
00:20:30.289 clat percentiles (usec):
00:20:30.289 | 1.00th=[ 3359], 5.00th=[ 7308], 10.00th=[ 9372], 20.00th=[10028],
00:20:30.289 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814],
00:20:30.289 | 70.00th=[10945], 80.00th=[12387], 90.00th=[13960], 95.00th=[14484],
00:20:30.289 | 99.00th=[17171], 99.50th=[18744], 99.90th=[21627], 99.95th=[21627],
00:20:30.289 | 99.99th=[21890]
00:20:30.289 bw ( KiB/s): min=20480, max=20480, per=28.64%, avg=20480.00, stdev= 0.00, samples=2
00:20:30.289 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2
00:20:30.289 lat (msec) : 2=0.07%, 4=0.62%, 10=9.65%, 20=89.52%, 50=0.14%
00:20:30.289 cpu : usr=3.39%, sys=4.79%, ctx=407, majf=0, minf=1
00:20:30.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:20:30.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:30.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:30.289 issued rwts: total=4833,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:30.289 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:30.289 job2: (groupid=0, jobs=1): err= 0: pid=3965011: Fri Apr 26 23:23:19 2024
00:20:30.289 read: IOPS=4929, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1003msec)
00:20:30.289 slat (nsec): min=928, max=5235.6k, avg=98633.80, stdev=478340.81
00:20:30.289 clat (usec): min=1184, max=17828, avg=12326.18, stdev=1571.09
00:20:30.289 lat (usec): min=2899, max=19519, avg=12424.82, stdev=1619.61
00:20:30.289 clat percentiles (usec):
00:20:30.290 | 1.00th=[ 7767], 5.00th=[10421], 10.00th=[10945], 20.00th=[11338],
00:20:30.290 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387],
00:20:30.290 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14353], 95.00th=[14746],
00:20:30.290 | 99.00th=[16319], 99.50th=[16712], 99.90th=[17433], 99.95th=[17433],
00:20:30.290 | 99.99th=[17957]
00:20:30.290 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets
00:20:30.290 slat (nsec): min=1539, max=7501.2k, avg=96370.69, stdev=372467.42
00:20:30.290 clat (usec): min=8033, max=31307, avg=12873.41, stdev=2657.70
00:20:30.290 lat (usec): min=8037, max=31317, avg=12969.78, stdev=2667.05
00:20:30.290 clat percentiles (usec):
00:20:30.290 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10421], 20.00th=[10945],
00:20:30.290 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12518], 60.00th=[13173],
00:20:30.290 | 70.00th=[13698], 80.00th=[14091], 90.00th=[15139], 95.00th=[16057],
00:20:30.290 | 99.00th=[26608], 99.50th=[28705], 99.90th=[30540], 99.95th=[31327],
00:20:30.290 | 99.99th=[31327]
00:20:30.290 bw ( KiB/s): min=20480, max=20521, per=28.66%, avg=20500.50, stdev=28.99, samples=2
00:20:30.290 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2
00:20:30.290 lat (msec) : 2=0.01%, 4=0.19%, 10=2.41%, 20=96.17%, 50=1.21%
00:20:30.290 cpu : usr=2.40%, sys=3.69%, ctx=822, majf=0, minf=1
00:20:30.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:20:30.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:30.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:30.290 issued rwts: total=4944,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:30.290 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:30.290 job3: (groupid=0, jobs=1): err= 0: pid=3965017: Fri Apr 26 23:23:19 2024
00:20:30.290 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec)
00:20:30.290 slat (nsec): min=894, max=13865k, avg=125771.08, stdev=875412.14
00:20:30.290 clat (usec): min=4604, max=41597, avg=14242.46, stdev=5018.44
00:20:30.290 lat (usec): min=4608, max=41604, avg=14368.24, stdev=5094.35
00:20:30.290 clat percentiles (usec):
00:20:30.290 | 1.00th=[ 5211], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552],
00:20:30.290 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[14222],
00:20:30.290 | 70.00th=[16712], 80.00th=[17695], 90.00th=[22152], 95.00th=[23987],
00:20:30.290 | 99.00th=[29754], 99.50th=[32637], 99.90th=[41681], 99.95th=[41681],
00:20:30.290 | 99.99th=[41681]
00:20:30.290 write: IOPS=3300, BW=12.9MiB/s (13.5MB/s)(13.1MiB/1015msec); 0 zone resets
00:20:30.290 slat (nsec): min=1560, max=12678k, avg=177224.68, stdev=841417.58
00:20:30.290 clat (usec): min=972, max=82937, avg=25364.30, stdev=19349.90
00:20:30.290 lat (usec): min=981, max=82945, avg=25541.53, stdev=19469.93
00:20:30.290 clat percentiles (usec):
00:20:30.290 | 1.00th=[ 3163], 5.00th=[ 6587], 10.00th=[ 8979], 20.00th=[12256],
00:20:30.290 | 30.00th=[16319], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957],
00:20:30.290 | 70.00th=[23725], 80.00th=[38011], 90.00th=[61080], 95.00th=[73925],
00:20:30.290 | 99.00th=[78119], 99.50th=[79168], 99.90th=[83362], 99.95th=[83362],
00:20:30.290 | 99.99th=[83362]
00:20:30.290 bw ( KiB/s): min=11720, max=14064, per=18.03%, avg=12892.00, stdev=1657.46, samples=2
00:20:30.290 iops : min= 2930, max= 3516, avg=3223.00, stdev=414.36, samples=2
00:20:30.290 lat (usec) : 1000=0.03%
00:20:30.290 lat (msec) : 2=0.16%, 4=0.84%, 10=8.28%, 20=67.39%, 50=16.52%
00:20:30.290 lat (msec) : 100=6.77%
00:20:30.290 cpu : usr=2.37%, sys=3.25%, ctx=416, majf=0, minf=1
00:20:30.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:20:30.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:30.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:30.290 issued rwts: total=3072,3350,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:30.290 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:30.290
00:20:30.290 Run status group 0 (all jobs):
00:20:30.290 READ: bw=65.2MiB/s (68.4MB/s), 11.8MiB/s-19.3MiB/s (12.4MB/s-20.2MB/s), io=66.2MiB (69.4MB), run=1003-1015msec
00:20:30.290 WRITE: bw=69.8MiB/s (73.2MB/s), 12.9MiB/s-19.9MiB/s (13.5MB/s-20.9MB/s), io=70.9MiB (74.3MB), run=1003-1015msec
00:20:30.290
00:20:30.290 Disk stats (read/write):
00:20:30.290 nvme0n1: ios=3489/3584, merge=0/0, ticks=43112/58850, in_queue=101962, util=88.38%
00:20:30.290 nvme0n2: ios=4135/4224, merge=0/0, ticks=15032/11539, in_queue=26571, util=96.64%
00:20:30.290 nvme0n3: ios=4139/4327, merge=0/0, ticks=16566/18089, in_queue=34655, util=100.00%
00:20:30.290 nvme0n4: ios=2560/2847, merge=0/0, ticks=35517/65517, in_queue=101034, util=89.43%
00:20:30.290 23:23:19 -- target/fio.sh@55 -- # sync
00:20:30.290 23:23:19 -- target/fio.sh@59 -- # fio_pid=3965079
00:20:30.290 23:23:19 -- target/fio.sh@61 -- # sleep 3
00:20:30.290 23:23:19 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:20:30.290 [global]
00:20:30.290 thread=1
00:20:30.290 invalidate=1
00:20:30.290 rw=read
00:20:30.290 time_based=1
00:20:30.290 runtime=10
00:20:30.290 ioengine=libaio
00:20:30.290 direct=1
00:20:30.290 bs=4096
00:20:30.290 iodepth=1
00:20:30.290 norandommap=1
00:20:30.290 numjobs=1
00:20:30.290
00:20:30.290 [job0]
00:20:30.290 filename=/dev/nvme0n1
00:20:30.290 [job1]
00:20:30.290 filename=/dev/nvme0n2
00:20:30.290 [job2]
00:20:30.290 filename=/dev/nvme0n3
00:20:30.290 [job3]
00:20:30.290 filename=/dev/nvme0n4
00:20:30.290 Could not set queue depth (nvme0n1)
00:20:30.290 Could not set queue depth (nvme0n2)
00:20:30.290 Could not set queue depth (nvme0n3)
00:20:30.290 Could not set queue depth (nvme0n4)
00:20:30.859 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:30.859 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:30.859 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:30.859 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:30.859 fio-3.35
00:20:30.859 Starting 4 threads
00:20:33.478 23:23:22 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:20:33.479 23:23:22 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:20:33.479 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6180864, buflen=4096
00:20:33.479 fio: pid=3965499, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:20:33.479 23:23:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:33.479 23:23:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:20:33.479 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=270336, buflen=4096
00:20:33.479 fio: pid=3965491, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:20:33.739 23:23:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:33.739 23:23:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:20:33.739 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=6365184, buflen=4096
00:20:33.739 fio: pid=3965455, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:20:33.999 23:23:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:33.999 23:23:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:20:33.999 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=335872, buflen=4096
00:20:33.999 fio: pid=3965470, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:20:33.999
00:20:33.999 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3965455: Fri Apr 26 23:23:23 2024
00:20:33.999 read: IOPS=537, BW=2147KiB/s (2199kB/s)(6216KiB/2895msec)
00:20:33.999 slat (usec): min=6, max=10157, avg=30.62, stdev=257.07
00:20:33.999 clat (usec): min=253, max=41848, avg=1811.98, stdev=5581.26
00:20:33.999 lat (usec): min=278, max=41872, avg=1842.60, stdev=5586.56
00:20:33.999 clat percentiles (usec):
00:20:33.999 | 1.00th=[ 562], 5.00th=[ 717], 10.00th=[ 791], 20.00th=[ 865],
00:20:33.999 | 30.00th=[ 914], 40.00th=[ 979], 50.00th=[ 1037], 60.00th=[ 1090],
00:20:33.999 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[ 1303], 95.00th=[ 1352],
00:20:33.999 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:20:33.999 | 99.99th=[41681]
00:20:33.999 bw ( KiB/s): min= 240, max= 4056, per=58.11%, avg=2440.00, stdev=1887.05, samples=5
00:20:33.999 iops : min= 60, max= 1014, avg=610.00, stdev=471.76, samples=5
00:20:33.999 lat (usec) : 500=0.32%, 750=6.75%, 1000=36.66%
00:20:33.999 lat (msec) : 2=54.28%, 50=1.93%
00:20:33.999 cpu : usr=0.31%, sys=1.76%, ctx=1558, majf=0, minf=1
00:20:33.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:33.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:33.999 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:33.999 issued rwts: total=1555,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:33.999 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:33.999 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3965470: Fri Apr 26 23:23:23 2024
00:20:33.999 read: IOPS=27, BW=107KiB/s (110kB/s)(328KiB/3059msec)
00:20:33.999 slat (usec): min=7, max=6639, avg=106.19, stdev=725.93
00:20:33.999 clat (usec): min=643, max=41638, avg=36928.73, stdev=11989.23
00:20:33.999 lat (usec): min=652, max=47959, avg=37035.89, stdev=12043.81
00:20:33.999 clat percentiles (usec):
00:20:33.999 | 1.00th=[ 644], 5.00th=[ 873], 10.00th=[28443], 20.00th=[40633],
00:20:33.999 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:20:33.999 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:20:33.999 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:20:33.999 | 99.99th=[41681]
00:20:33.999 bw ( KiB/s): min= 96, max= 160, per=2.62%, avg=110.40, stdev=27.94, samples=5
00:20:33.999 iops : min= 24, max= 40, avg=27.60, stdev= 6.99, samples=5
00:20:33.999 lat (usec) : 750=2.41%, 1000=6.02%
00:20:33.999 lat (msec) : 2=1.20%, 50=89.16%
00:20:33.999 cpu : usr=0.00%, sys=0.10%, ctx=88, majf=0, minf=1
00:20:33.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:33.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:33.999 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:33.999 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:33.999 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:33.999 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3965491: Fri Apr 26 23:23:23 2024
00:20:33.999 read: IOPS=24, BW=96.0KiB/s (98.3kB/s)(264KiB/2751msec)
00:20:33.999 slat (nsec): min=22308, max=34180, avg=26347.45, stdev=1171.44
00:20:33.999 clat (usec): min=920, max=42198, avg=41326.78, stdev=5051.98
00:20:33.999 lat (usec): min=954, max=42224, avg=41353.13, stdev=5051.00
00:20:33.999 clat percentiles (usec):
00:20:33.999 | 1.00th=[ 922], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681],
00:20:33.999 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:20:33.999 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:20:33.999 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:20:33.999 | 99.99th=[42206]
00:20:33.999 bw ( KiB/s): min= 96, max= 96, per=2.29%, avg=96.00, stdev= 0.00, samples=5
00:20:33.999 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5
00:20:33.999 lat (usec) : 1000=1.49%
00:20:33.999 lat (msec) : 50=97.01%
00:20:33.999 cpu : usr=0.15%, sys=0.00%, ctx=67, majf=0, minf=1
00:20:33.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:33.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:33.999 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:33.999 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:33.999 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:33.999 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3965499: Fri Apr 26 23:23:23 2024
00:20:33.999 read: IOPS=583, BW=2333KiB/s (2389kB/s)(6036KiB/2587msec)
00:20:33.999 slat (nsec): min=7128, max=62171, avg=25726.69, stdev=4355.30
00:20:33.999 clat (usec): min=487, max=42382, avg=1668.27, stdev=4792.21
00:20:33.999 lat (usec): min=514, max=42407, avg=1694.00, stdev=4792.12
00:20:33.999 clat percentiles (usec):
00:20:33.999 | 1.00th=[ 799], 5.00th=[ 955], 10.00th=[ 996], 20.00th=[ 1045],
00:20:33.999 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123],
00:20:33.999 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1237],
00:20:33.999 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:20:33.999 | 99.99th=[42206]
00:20:33.999 bw ( KiB/s): min= 96, max= 3544, per=57.42%, avg=2411.20, stdev=1592.15, samples=5
00:20:33.999 iops : min= 24, max= 886, avg=602.80, stdev=398.04, samples=5
00:20:33.999 lat (usec) : 500=0.07%, 750=0.66%, 1000=9.54%
00:20:33.999 lat (msec) : 2=88.28%, 50=1.39%
00:20:33.999 cpu : usr=1.12%, sys=2.05%, ctx=1510, majf=0, minf=2
00:20:33.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:33.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:33.999 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:33.999 issued rwts: total=1510,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:33.999 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:33.999
00:20:33.999 Run status group 0 (all jobs):
00:20:33.999 READ: bw=4199KiB/s (4300kB/s), 96.0KiB/s-2333KiB/s (98.3kB/s-2389kB/s), io=12.5MiB (13.2MB), run=2587-3059msec
00:20:33.999
00:20:33.999 Disk stats (read/write):
00:20:33.999 nvme0n1: ios=1553/0, merge=0/0, ticks=2737/0, in_queue=2737, util=94.30%
00:20:33.999 nvme0n2: ios=103/0, merge=0/0, ticks=3224/0, in_queue=3224, util=98.79%
00:20:33.999 nvme0n3: ios=62/0, merge=0/0, ticks=2562/0, in_queue=2562, util=96.03%
00:20:33.999 nvme0n4: ios=1510/0, merge=0/0, ticks=2388/0, in_queue=2388, util=96.05%
00:20:33.999 23:23:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:33.999 23:23:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:20:34.260 23:23:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:34.260 23:23:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:20:34.520 23:23:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:34.520 23:23:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:20:34.520 23:23:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:20:34.520 23:23:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:20:34.781 23:23:23 -- target/fio.sh@69 -- # fio_status=0
00:20:34.781 23:23:23 -- target/fio.sh@70 -- # wait 3965079
00:20:34.781 23:23:23 -- target/fio.sh@70 -- # fio_status=4
00:20:34.781 23:23:23 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:20:34.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:20:34.781 23:23:23 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:20:34.781 23:23:23 -- common/autotest_common.sh@1205 -- # local i=0
00:20:34.781 23:23:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:20:34.781 23:23:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:20:34.781 23:23:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:20:34.781 23:23:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME
00:20:34.781 23:23:23 -- common/autotest_common.sh@1217 -- # return 0
00:20:34.781 23:23:23 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:20:34.781 23:23:23 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:20:34.781 nvmf hotplug test: fio failed as expected
00:20:34.781 23:23:23 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:35.043 23:23:24 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:20:35.043 23:23:24 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:20:35.043 23:23:24 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:20:35.043 23:23:24 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:20:35.043 23:23:24 -- target/fio.sh@91 -- # nvmftestfini
00:20:35.043 23:23:24 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:35.043 23:23:24 -- nvmf/common.sh@117 -- # sync
00:20:35.043 23:23:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:35.043 23:23:24 -- nvmf/common.sh@120 -- # set +e
00:20:35.043 23:23:24 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:35.043 23:23:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:35.043 rmmod nvme_tcp
00:20:35.043 rmmod nvme_fabrics
00:20:35.043 rmmod nvme_keyring
00:20:35.043 23:23:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:35.043 23:23:24 -- nvmf/common.sh@124 -- # set -e
00:20:35.043 23:23:24 -- nvmf/common.sh@125 -- # return 0
00:20:35.043 23:23:24 -- nvmf/common.sh@478 -- # '[' -n 3961666 ']'
00:20:35.043 23:23:24 -- nvmf/common.sh@479 -- # killprocess 3961666
00:20:35.043 23:23:24 -- common/autotest_common.sh@936 -- # '[' -z 3961666 ']'
00:20:35.043 23:23:24 -- common/autotest_common.sh@940 -- # kill -0 3961666
00:20:35.043 23:23:24 -- common/autotest_common.sh@941 -- # uname
00:20:35.043 23:23:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:35.043 23:23:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3961666
00:20:35.043 23:23:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:35.043 23:23:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:35.043 23:23:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3961666'
00:20:35.043 killing process with pid 3961666
00:20:35.043 23:23:24 -- common/autotest_common.sh@955 -- # kill 3961666
00:20:35.043 23:23:24 -- common/autotest_common.sh@960 -- # wait 3961666
00:20:35.303 23:23:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:35.303 23:23:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:35.303 23:23:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:35.303 23:23:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:35.303 23:23:24 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:35.303 23:23:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:35.303 23:23:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:35.303 23:23:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:37.216 23:23:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:37.216
00:20:37.216 real 0m28.190s
00:20:37.216 user 2m24.143s
00:20:37.216 sys 0m8.680s
00:20:37.216 23:23:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:37.216 23:23:26 -- common/autotest_common.sh@10 -- # set +x
00:20:37.216 ************************************
00:20:37.216 END TEST nvmf_fio_target
00:20:37.216 ************************************
00:20:37.477 23:23:26 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:20:37.477 23:23:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:37.477 23:23:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:37.477 23:23:26 -- common/autotest_common.sh@10 -- # set +x
00:20:37.477 ************************************
00:20:37.477 START TEST nvmf_bdevio
00:20:37.477 ************************************
00:20:37.477 23:23:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:20:37.477 * Looking for test storage...
00:20:37.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:20:37.739 23:23:26 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:37.739 23:23:26 -- nvmf/common.sh@7 -- # uname -s
00:20:37.739 23:23:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:37.739 23:23:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:37.739 23:23:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:37.739 23:23:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:37.739 23:23:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:37.739 23:23:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:37.739 23:23:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:37.739 23:23:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:37.739 23:23:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:37.739 23:23:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:37.739 23:23:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:37.739 23:23:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:37.739 23:23:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:37.739 23:23:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:37.739 23:23:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:37.739 23:23:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:37.739 23:23:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:37.739 23:23:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:37.739 23:23:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:37.739 23:23:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:37.739 23:23:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:37.739 23:23:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:37.739 23:23:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:37.739 23:23:26 -- paths/export.sh@5 -- # export PATH
00:20:37.739 23:23:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:37.739 23:23:26 -- nvmf/common.sh@47 -- # : 0
00:20:37.739 23:23:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:37.739 23:23:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:37.739 23:23:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:37.739 23:23:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:37.739 23:23:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:37.739 23:23:26 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:37.739 23:23:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:37.739 23:23:26 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:37.739 23:23:26 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:37.739 23:23:26 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:37.739 23:23:26 -- target/bdevio.sh@14 -- # nvmftestinit
00:20:37.739 23:23:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:20:37.739 23:23:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:37.739 23:23:26 -- nvmf/common.sh@437 -- # prepare_net_devs
00:20:37.739 23:23:26 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:20:37.739 23:23:26 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:20:37.739 23:23:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:37.739 23:23:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:37.739 23:23:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:37.739 23:23:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:20:37.739 23:23:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:20:37.739 23:23:26 -- nvmf/common.sh@285 -- # xtrace_disable
00:20:37.739 23:23:26 -- common/autotest_common.sh@10 -- # set +x
00:20:44.337 23:23:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:20:44.337 23:23:33 -- nvmf/common.sh@291 -- # pci_devs=()
00:20:44.337 23:23:33 -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:44.337 23:23:33 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:44.337 23:23:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:44.337 23:23:33 -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:44.337 23:23:33 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:44.337 23:23:33 -- nvmf/common.sh@295 -- # net_devs=()
00:20:44.337 23:23:33 -- nvmf/common.sh@295 -- # local -ga net_devs
00:20:44.337 23:23:33 -- nvmf/common.sh@296 -- # e810=()
00:20:44.337 23:23:33 -- nvmf/common.sh@296 -- # local -ga e810
00:20:44.337 23:23:33 -- nvmf/common.sh@297 -- # x722=()
00:20:44.337 23:23:33 -- nvmf/common.sh@297 -- # local -ga x722
00:20:44.337 23:23:33 -- nvmf/common.sh@298 -- # mlx=()
00:20:44.337 23:23:33 -- nvmf/common.sh@298 -- # local -ga mlx
00:20:44.337 23:23:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:44.337 23:23:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:20:44.337 23:23:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:20:44.337 23:23:33 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:20:44.337 23:23:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:44.337 23:23:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:20:44.337 Found 0000:31:00.0 (0x8086 - 0x159b)
00:20:44.337 23:23:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:44.337 23:23:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:20:44.337 Found 0000:31:00.1 (0x8086 - 0x159b)
00:20:44.337 23:23:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:20:44.337 23:23:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:44.337 23:23:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:44.337 23:23:33 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:20:44.337 23:23:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:44.337 23:23:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:20:44.337 Found net devices under 0000:31:00.0: cvl_0_0
00:20:44.337 23:23:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:20:44.337 23:23:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:44.337 23:23:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:44.337 23:23:33 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:20:44.337 23:23:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:44.337 23:23:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:20:44.337 Found net devices under 0000:31:00.1: cvl_0_1
00:20:44.337 23:23:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:20:44.337 23:23:33 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:20:44.337 23:23:33 -- nvmf/common.sh@403 -- # is_hw=yes
00:20:44.337 23:23:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:20:44.337 23:23:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:20:44.337 23:23:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:44.338 23:23:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:44.338 23:23:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:44.338 23:23:33 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:20:44.338 23:23:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:44.338 23:23:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:44.338 23:23:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:20:44.338 23:23:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:44.338 23:23:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:44.338 23:23:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:44.338 23:23:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:44.338 23:23:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:44.338 23:23:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:44.598 23:23:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:44.598 23:23:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:44.598 23:23:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:44.598 23:23:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:44.598 23:23:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:44.598 23:23:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:44.598 23:23:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:44.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:44.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms
00:20:44.598
00:20:44.598 --- 10.0.0.2 ping statistics ---
00:20:44.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:44.598 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms
00:20:44.598 23:23:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:44.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:44.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms
00:20:44.598
00:20:44.598 --- 10.0.0.1 ping statistics ---
00:20:44.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:44.598 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:20:44.598 23:23:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:44.598 23:23:33 -- nvmf/common.sh@411 -- # return 0
00:20:44.598 23:23:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:20:44.598 23:23:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:44.598 23:23:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:20:44.598 23:23:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:20:44.598 23:23:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:44.598 23:23:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:20:44.598 23:23:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:20:44.859 23:23:33 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:20:44.859 23:23:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:44.859 23:23:33 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:44.859 23:23:33 -- common/autotest_common.sh@10 -- # set +x
00:20:44.859 23:23:33 -- nvmf/common.sh@470 -- # nvmfpid=3970559
00:20:44.859 23:23:33 -- nvmf/common.sh@471 -- # waitforlisten 3970559
00:20:44.859 23:23:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:20:44.859 23:23:33 -- common/autotest_common.sh@817 -- # '[' -z 3970559 ']'
00:20:44.859 23:23:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:44.859 23:23:33 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:44.859 23:23:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:44.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:44.859 23:23:33 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:44.859 23:23:33 -- common/autotest_common.sh@10 -- # set +x
00:20:44.859 [2024-04-26 23:23:33.941265] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:20:44.859 [2024-04-26 23:23:33.941313] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:44.859 EAL: No free 2048 kB hugepages reported on node 1
00:20:44.859 [2024-04-26 23:23:34.026097] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:44.859 [2024-04-26 23:23:34.057230] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:44.859 [2024-04-26 23:23:34.057273] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:44.859 [2024-04-26 23:23:34.057281] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:44.859 [2024-04-26 23:23:34.057288] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:44.859 [2024-04-26 23:23:34.057293] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:44.859 [2024-04-26 23:23:34.057445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:20:44.859 [2024-04-26 23:23:34.057584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:20:44.859 [2024-04-26 23:23:34.058007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:20:44.859 [2024-04-26 23:23:34.058008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:45.801 23:23:34 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:45.801 23:23:34 -- common/autotest_common.sh@850 -- # return 0
00:20:45.801 23:23:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:45.801 23:23:34 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:45.801 23:23:34 -- common/autotest_common.sh@10 -- # set +x
00:20:45.801 23:23:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:45.801 23:23:34 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:45.801 23:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:45.801 23:23:34 -- common/autotest_common.sh@10 -- # set +x
00:20:45.801 [2024-04-26 23:23:34.780300] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:45.801 23:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:45.801 23:23:34 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:45.801 23:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:45.801 23:23:34 -- common/autotest_common.sh@10 -- # set +x
00:20:45.801 Malloc0
00:20:45.801 23:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:45.801 23:23:34 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:45.801 23:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:45.801 23:23:34 -- common/autotest_common.sh@10 -- # set +x
00:20:45.801 23:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:45.801 23:23:34 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:45.801 23:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:45.801 23:23:34 -- common/autotest_common.sh@10 -- # set +x
00:20:45.801 23:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:45.801 23:23:34 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:45.801 23:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:45.801 23:23:34 -- common/autotest_common.sh@10 -- # set +x
00:20:45.801 [2024-04-26 23:23:34.845194] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:45.801 23:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:45.801 23:23:34 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:20:45.801 23:23:34 -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:20:45.801 23:23:34 -- nvmf/common.sh@521 -- # config=()
00:20:45.801 23:23:34 -- nvmf/common.sh@521 -- # local subsystem config
00:20:45.801 23:23:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:20:45.801 23:23:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:20:45.801 {
00:20:45.801 "params": {
00:20:45.801 "name": "Nvme$subsystem",
00:20:45.801 "trtype": "$TEST_TRANSPORT",
00:20:45.801 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:45.801 "adrfam": "ipv4",
00:20:45.801 "trsvcid": "$NVMF_PORT",
00:20:45.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:45.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:45.801 "hdgst": ${hdgst:-false},
00:20:45.801 "ddgst": ${ddgst:-false}
00:20:45.801 },
00:20:45.801 "method": "bdev_nvme_attach_controller"
00:20:45.801 }
00:20:45.801 EOF
00:20:45.801 )")
00:20:45.801 23:23:34 -- nvmf/common.sh@543 -- # cat
00:20:45.801 23:23:34 -- nvmf/common.sh@545 -- # jq .
00:20:45.801 23:23:34 -- nvmf/common.sh@546 -- # IFS=,
00:20:45.801 23:23:34 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:20:45.801 "params": {
00:20:45.801 "name": "Nvme1",
00:20:45.801 "trtype": "tcp",
00:20:45.801 "traddr": "10.0.0.2",
00:20:45.801 "adrfam": "ipv4",
00:20:45.801 "trsvcid": "4420",
00:20:45.801 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:45.801 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:45.801 "hdgst": false,
00:20:45.801 "ddgst": false
00:20:45.801 },
00:20:45.801 "method": "bdev_nvme_attach_controller"
00:20:45.801 }'
00:20:45.801 [2024-04-26 23:23:34.899812] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:20:45.801 [2024-04-26 23:23:34.899895] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3970673 ]
00:20:45.801 EAL: No free 2048 kB hugepages reported on node 1
00:20:45.801 [2024-04-26 23:23:34.967301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:45.801 [2024-04-26 23:23:35.005564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:45.801 [2024-04-26 23:23:35.005689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:45.801 [2024-04-26 23:23:35.005693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:46.087 I/O targets:
00:20:46.087 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:20:46.087
00:20:46.087
00:20:46.087 CUnit - A unit testing framework for C - Version 2.1-3
00:20:46.087 http://cunit.sourceforge.net/
00:20:46.087
00:20:46.087
00:20:46.087 Suite: bdevio tests on: Nvme1n1
00:20:46.349 Test: blockdev write read block ...passed
00:20:46.349 Test: blockdev write zeroes read block ...passed
00:20:46.349 Test: blockdev write zeroes read no split ...passed
00:20:46.349 Test: blockdev write zeroes read split ...passed
00:20:46.349 Test: blockdev write zeroes read split partial ...passed
00:20:46.349 Test: blockdev reset ...[2024-04-26 23:23:35.417415] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:46.349 [2024-04-26 23:23:35.417484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222b3f0 (9): Bad file descriptor
00:20:46.349 [2024-04-26 23:23:35.435045] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:46.349 passed
00:20:46.349 Test: blockdev write read 8 blocks ...passed
00:20:46.349 Test: blockdev write read size > 128k ...passed
00:20:46.349 Test: blockdev write read invalid size ...passed
00:20:46.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:20:46.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:20:46.349 Test: blockdev write read max offset ...passed
00:20:46.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:20:46.611 Test: blockdev writev readv 8 blocks ...passed
00:20:46.611 Test: blockdev writev readv 30 x 1block ...passed
00:20:46.611 Test: blockdev writev readv block ...passed
00:20:46.611 Test: blockdev writev readv size > 128k ...passed
00:20:46.611 Test: blockdev writev readv size > 128k in two iovs ...passed
00:20:46.611 Test: blockdev comparev and writev ...[2024-04-26 23:23:35.696129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:46.611 [2024-04-26 23:23:35.696155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:46.611 [2024-04-26 23:23:35.696166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:46.611 [2024-04-26 23:23:35.696171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:46.611 [2024-04-26 23:23:35.696486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:46.611 [2024-04-26 23:23:35.696498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:20:46.611 [2024-04-26 23:23:35.696508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:46.611 [2024-04-26 23:23:35.696513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:20:46.611 [2024-04-26 23:23:35.696809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:46.611 [2024-04-26 23:23:35.696816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:20:46.611 [2024-04-26 23:23:35.696825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:46.611 [2024-04-26 23:23:35.696831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:20:46.611 [2024-04-26 23:23:35.697108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:46.611 [2024-04-26 23:23:35.697116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:20:46.611 [2024-04-26 23:23:35.697125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:46.611 [2024-04-26 23:23:35.697130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:20:46.611 passed
00:20:46.611 Test: blockdev nvme passthru rw ...passed
00:20:46.611 Test: blockdev nvme passthru vendor specific ...[2024-04-26 23:23:35.780229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:46.611 [2024-04-26 23:23:35.780242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:20:46.611 [2024-04-26 23:23:35.780402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:46.611 [2024-04-26 23:23:35.780409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:20:46.611 [2024-04-26 23:23:35.780562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:46.611 [2024-04-26 23:23:35.780570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:20:46.611 [2024-04-26 23:23:35.780726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:46.611 [2024-04-26 23:23:35.780733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:20:46.611 passed
00:20:46.611 Test: blockdev nvme admin passthru ...passed
00:20:46.611 Test: blockdev copy ...passed
00:20:46.611
00:20:46.611 Run Summary: Type Total Ran Passed Failed Inactive
00:20:46.611 suites 1 1 n/a 0 0
00:20:46.611 tests 23 23 23 0 0
00:20:46.611 asserts 152 152 152 0 n/a
00:20:46.611
00:20:46.611 Elapsed time = 1.070 seconds
00:20:46.871 23:23:35 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:46.871 23:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:46.871 23:23:35 -- common/autotest_common.sh@10 -- # set +x
00:20:46.871 23:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:46.871 23:23:35 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:20:46.871 23:23:35 -- target/bdevio.sh@30 -- # nvmftestfini
00:20:46.871 23:23:35 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:46.871 23:23:35 -- nvmf/common.sh@117 -- # sync
00:20:46.871 23:23:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:46.871 23:23:35 -- nvmf/common.sh@120 -- # set +e
00:20:46.871 23:23:35 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:46.871 23:23:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:46.871 rmmod nvme_tcp
00:20:46.872 rmmod nvme_fabrics
00:20:46.872 rmmod nvme_keyring
00:20:46.872 23:23:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:46.872 23:23:36 -- nvmf/common.sh@124 -- # set -e
00:20:46.872 23:23:36 -- nvmf/common.sh@125 -- # return 0
00:20:46.872 23:23:36 -- nvmf/common.sh@478 -- # '[' -n 3970559 ']'
00:20:46.872 23:23:36 -- nvmf/common.sh@479 -- # killprocess 3970559
00:20:46.872 23:23:36 -- common/autotest_common.sh@936 -- # '[' -z 3970559 ']'
00:20:46.872 23:23:36 -- common/autotest_common.sh@940 -- # kill -0 3970559
00:20:46.872 23:23:36 -- common/autotest_common.sh@941 -- # uname
00:20:46.872 23:23:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:46.872 23:23:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3970559
00:20:46.872 23:23:36 -- common/autotest_common.sh@942 -- # process_name=reactor_3
00:20:46.872 23:23:36 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']'
00:20:46.872 23:23:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3970559'
00:20:46.872 killing process with pid 3970559
00:20:46.872 23:23:36 -- common/autotest_common.sh@955 -- # kill 3970559
00:20:46.872 23:23:36 -- common/autotest_common.sh@960 -- # wait 3970559
00:20:47.132 23:23:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:47.132 23:23:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:47.132 23:23:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:47.132 23:23:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:47.132 23:23:36 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:47.132 23:23:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:47.132 23:23:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:47.132 23:23:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:49.680 23:23:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:49.680
00:20:49.680 real 0m11.715s
00:20:49.680 user 0m13.065s
00:20:49.680 sys 0m5.751s
00:20:49.680 23:23:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:49.680 23:23:38 -- common/autotest_common.sh@10 -- # set +x
00:20:49.680 ************************************
00:20:49.680 END TEST nvmf_bdevio
00:20:49.680 ************************************
00:20:49.680 23:23:38 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']'
00:20:49.680 23:23:38 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:20:49.680 23:23:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:20:49.680 23:23:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:49.680 23:23:38 -- common/autotest_common.sh@10 -- # set +x
00:20:49.680 ************************************
00:20:49.680 START TEST nvmf_bdevio_no_huge
00:20:49.680 ************************************
00:20:49.680 23:23:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:20:49.680 * Looking for test storage...
00:20:49.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:20:49.680 23:23:38 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:49.680 23:23:38 -- nvmf/common.sh@7 -- # uname -s
00:20:49.680 23:23:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:49.680 23:23:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:49.680 23:23:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:49.680 23:23:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:49.680 23:23:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:49.680 23:23:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:49.680 23:23:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:49.680 23:23:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:49.680 23:23:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:49.680 23:23:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:49.680 23:23:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:49.680 23:23:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:49.680 23:23:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:49.680 23:23:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:49.680 23:23:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:49.680 23:23:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:49.680 23:23:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:49.680 23:23:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:49.680 23:23:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:49.680 23:23:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:49.680 23:23:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:49.680 23:23:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:49.680 23:23:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:49.680 23:23:38 -- paths/export.sh@5 -- # export PATH
00:20:49.680 23:23:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:49.680 23:23:38 -- nvmf/common.sh@47 -- # : 0
00:20:49.680 23:23:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:49.680 23:23:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:49.680 23:23:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:49.680 23:23:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:49.680 23:23:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:49.680 23:23:38 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:49.680 23:23:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:49.680 23:23:38 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:49.680 23:23:38 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:49.680 23:23:38 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:49.680 23:23:38 -- target/bdevio.sh@14 -- # nvmftestinit
00:20:49.680 23:23:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:20:49.680 23:23:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:49.680 23:23:38 -- nvmf/common.sh@437 -- # prepare_net_devs
00:20:49.680 23:23:38 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:20:49.680 23:23:38 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:20:49.680 23:23:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:49.680 23:23:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:49.680 23:23:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:49.680 23:23:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:20:49.680 23:23:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:20:49.680 23:23:38 -- nvmf/common.sh@285 -- # xtrace_disable
00:20:49.680 23:23:38 -- common/autotest_common.sh@10 -- # set +x
00:20:56.272 23:23:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:20:56.272 23:23:45 -- nvmf/common.sh@291 -- # pci_devs=()
00:20:56.272 23:23:45 -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:56.272 23:23:45 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:56.272 23:23:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:56.272 23:23:45 -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:56.272 23:23:45 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:56.272 23:23:45 -- nvmf/common.sh@295 -- # net_devs=()
00:20:56.272 23:23:45 -- nvmf/common.sh@295 -- # local -ga net_devs
00:20:56.272 23:23:45 -- nvmf/common.sh@296 -- # e810=()
00:20:56.272 23:23:45 -- nvmf/common.sh@296 -- # local -ga e810
00:20:56.272 23:23:45 -- nvmf/common.sh@297 -- # x722=()
00:20:56.272 23:23:45 -- nvmf/common.sh@297 -- # local -ga x722
00:20:56.272 23:23:45 -- nvmf/common.sh@298 -- # mlx=()
00:20:56.272 23:23:45 -- nvmf/common.sh@298 -- # local -ga mlx
00:20:56.272 23:23:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:56.272 23:23:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:20:56.272 23:23:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:20:56.272 23:23:45 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:20:56.272 23:23:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:56.272 23:23:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:20:56.272 Found 0000:31:00.0 (0x8086 - 0x159b)
00:20:56.272 23:23:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:56.272 23:23:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:20:56.272 Found 0000:31:00.1 (0x8086 - 0x159b)
00:20:56.272 23:23:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:20:56.272 23:23:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:56.272 23:23:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:56.272 23:23:45 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:20:56.272 23:23:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:56.272 23:23:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:20:56.272 Found net devices under 0000:31:00.0: cvl_0_0
00:20:56.272 23:23:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:20:56.272 23:23:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:56.272 23:23:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:56.272 23:23:45 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:20:56.272 23:23:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:56.272 23:23:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:20:56.272 Found net devices under 0000:31:00.1: cvl_0_1
00:20:56.272 23:23:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:20:56.272 23:23:45 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:20:56.272 23:23:45 -- nvmf/common.sh@403 -- # is_hw=yes
00:20:56.272 23:23:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:20:56.272 23:23:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:20:56.272 23:23:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:56.272 23:23:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:56.272 23:23:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:56.272 23:23:45 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:20:56.272 23:23:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:56.272 23:23:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:56.272 23:23:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:20:56.272 23:23:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:56.272 23:23:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:56.272 23:23:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:56.272 23:23:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:56.272 23:23:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:56.272 23:23:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:56.533 23:23:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:56.533 23:23:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:56.533 23:23:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:56.533 23:23:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:56.533 23:23:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:56.533 23:23:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:56.533 23:23:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:56.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:56.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms
00:20:56.533
00:20:56.533 --- 10.0.0.2 ping statistics ---
00:20:56.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:56.533 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms
00:20:56.533 23:23:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:56.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:56.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:20:56.533 00:20:56.533 --- 10.0.0.1 ping statistics --- 00:20:56.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.533 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:20:56.533 23:23:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.533 23:23:45 -- nvmf/common.sh@411 -- # return 0 00:20:56.533 23:23:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:56.533 23:23:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.533 23:23:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:56.533 23:23:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:56.533 23:23:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.533 23:23:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:56.533 23:23:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:56.533 23:23:45 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:56.533 23:23:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:56.533 23:23:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:56.533 23:23:45 -- common/autotest_common.sh@10 -- # set +x 00:20:56.533 23:23:45 -- nvmf/common.sh@470 -- # nvmfpid=3975166 00:20:56.533 23:23:45 -- nvmf/common.sh@471 -- # waitforlisten 3975166 00:20:56.533 23:23:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:56.533 23:23:45 -- common/autotest_common.sh@817 -- # '[' -z 3975166 ']' 00:20:56.533 23:23:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.533 23:23:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.533 23:23:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.533 23:23:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.533 23:23:45 -- common/autotest_common.sh@10 -- # set +x 00:20:56.794 [2024-04-26 23:23:45.812532] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:56.794 [2024-04-26 23:23:45.812590] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:56.794 [2024-04-26 23:23:45.903079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.794 [2024-04-26 23:23:45.982384] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.794 [2024-04-26 23:23:45.982436] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.794 [2024-04-26 23:23:45.982444] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.794 [2024-04-26 23:23:45.982451] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.794 [2024-04-26 23:23:45.982457] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
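The nvmf_tcp_init sequence traced above is the heart of the single-box setup: with one dual-port E810 NIC, the harness moves port 0 (cvl_0_0) into a private network namespace so the same machine can act as both target (10.0.0.2 inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 on cvl_0_1), with real packets crossing between the two ports. A condensed replay of the commands shown in the trace, using the cvl_* names from this run (interface names will differ on other NICs):

  # Sketch of nvmf_tcp_init as exercised above; run as root on a dual-port NIC.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # hide port 0 from the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability

Every later "ip netns exec cvl_0_0_ns_spdk" prefix in the log is this namespace resolved through NVMF_TARGET_NS_CMD; nvmf_tgt is launched inside it, which is why its command line below carries that prefix.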
00:20:56.794 [2024-04-26 23:23:45.982618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:56.794 [2024-04-26 23:23:45.982805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:56.794 [2024-04-26 23:23:45.982968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:56.794 [2024-04-26 23:23:45.983121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.367 23:23:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.367 23:23:46 -- common/autotest_common.sh@850 -- # return 0 00:20:57.367 23:23:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:57.367 23:23:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:57.367 23:23:46 -- common/autotest_common.sh@10 -- # set +x 00:20:57.629 23:23:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.629 23:23:46 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:57.629 23:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.629 23:23:46 -- common/autotest_common.sh@10 -- # set +x 00:20:57.629 [2024-04-26 23:23:46.653835] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.629 23:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.629 23:23:46 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:57.629 23:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.629 23:23:46 -- common/autotest_common.sh@10 -- # set +x 00:20:57.629 Malloc0 00:20:57.629 23:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.629 23:23:46 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.629 23:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.629 23:23:46 -- common/autotest_common.sh@10 -- # set +x 00:20:57.629 23:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.629 23:23:46 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:57.629 23:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.629 23:23:46 -- common/autotest_common.sh@10 -- # set +x 00:20:57.629 23:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.629 23:23:46 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.629 23:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.629 23:23:46 -- common/autotest_common.sh@10 -- # set +x 00:20:57.629 [2024-04-26 23:23:46.707583] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.629 23:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.629 23:23:46 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:57.629 23:23:46 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:57.629 23:23:46 -- nvmf/common.sh@521 -- # config=() 00:20:57.629 23:23:46 -- nvmf/common.sh@521 -- # local subsystem config 00:20:57.629 23:23:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:57.629 23:23:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:57.629 { 00:20:57.629 "params": { 00:20:57.629 "name": "Nvme$subsystem", 00:20:57.629 "trtype": "$TEST_TRANSPORT", 00:20:57.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.629 "adrfam": "ipv4", 00:20:57.629 
"trsvcid": "$NVMF_PORT", 00:20:57.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.629 "hdgst": ${hdgst:-false}, 00:20:57.629 "ddgst": ${ddgst:-false} 00:20:57.629 }, 00:20:57.629 "method": "bdev_nvme_attach_controller" 00:20:57.629 } 00:20:57.629 EOF 00:20:57.629 )") 00:20:57.629 23:23:46 -- nvmf/common.sh@543 -- # cat 00:20:57.629 23:23:46 -- nvmf/common.sh@545 -- # jq . 00:20:57.629 23:23:46 -- nvmf/common.sh@546 -- # IFS=, 00:20:57.629 23:23:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:57.629 "params": { 00:20:57.629 "name": "Nvme1", 00:20:57.629 "trtype": "tcp", 00:20:57.629 "traddr": "10.0.0.2", 00:20:57.629 "adrfam": "ipv4", 00:20:57.629 "trsvcid": "4420", 00:20:57.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.629 "hdgst": false, 00:20:57.629 "ddgst": false 00:20:57.629 }, 00:20:57.629 "method": "bdev_nvme_attach_controller" 00:20:57.629 }' 00:20:57.629 [2024-04-26 23:23:46.762704] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:57.629 [2024-04-26 23:23:46.762776] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3975422 ] 00:20:57.629 [2024-04-26 23:23:46.830448] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:57.890 [2024-04-26 23:23:46.900327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.890 [2024-04-26 23:23:46.900472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.890 [2024-04-26 23:23:46.900476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.890 I/O targets: 00:20:57.890 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:57.890 00:20:57.890 00:20:57.890 CUnit - A unit testing framework for C - Version 2.1-3 00:20:57.890 http://cunit.sourceforge.net/ 00:20:57.890 00:20:57.890 00:20:57.890 Suite: bdevio tests on: Nvme1n1 00:20:57.890 Test: blockdev write read block ...passed 00:20:58.152 Test: blockdev write zeroes read block ...passed 00:20:58.152 Test: blockdev write zeroes read no split ...passed 00:20:58.152 Test: blockdev write zeroes read split ...passed 00:20:58.152 Test: blockdev write zeroes read split partial ...passed 00:20:58.152 Test: blockdev reset ...[2024-04-26 23:23:47.245517] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:58.152 [2024-04-26 23:23:47.245573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x923980 (9): Bad file descriptor 00:20:58.152 [2024-04-26 23:23:47.276517] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:58.152 passed 00:20:58.152 Test: blockdev write read 8 blocks ...passed 00:20:58.152 Test: blockdev write read size > 128k ...passed 00:20:58.152 Test: blockdev write read invalid size ...passed 00:20:58.152 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:58.152 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:58.152 Test: blockdev write read max offset ...passed 00:20:58.414 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:58.414 Test: blockdev writev readv 8 blocks ...passed 00:20:58.414 Test: blockdev writev readv 30 x 1block ...passed 00:20:58.414 Test: blockdev writev readv block ...passed 00:20:58.414 Test: blockdev writev readv size > 128k ...passed 00:20:58.414 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:58.414 Test: blockdev comparev and writev ...[2024-04-26 23:23:47.500576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:58.414 [2024-04-26 23:23:47.500600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.414 [2024-04-26 23:23:47.500611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:58.414 [2024-04-26 23:23:47.500617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:58.414 [2024-04-26 23:23:47.501153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:58.414 [2024-04-26 23:23:47.501162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:58.414 [2024-04-26 23:23:47.501172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:58.414 [2024-04-26 23:23:47.501177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:58.414 [2024-04-26 23:23:47.501647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:58.414 [2024-04-26 23:23:47.501654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:58.414 [2024-04-26 23:23:47.501663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:58.414 [2024-04-26 23:23:47.501668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:58.414 [2024-04-26 23:23:47.502179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:58.414 [2024-04-26 23:23:47.502187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:58.414 [2024-04-26 23:23:47.502196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:58.414 [2024-04-26 23:23:47.502201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:58.414 passed 00:20:58.414 Test: blockdev nvme passthru rw ...passed 00:20:58.414 Test: blockdev nvme passthru vendor specific ...[2024-04-26 23:23:47.586575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:58.414 [2024-04-26 23:23:47.586585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:58.414 [2024-04-26 23:23:47.586951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:58.414 [2024-04-26 23:23:47.586958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:58.414 [2024-04-26 23:23:47.587338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:58.414 [2024-04-26 23:23:47.587345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:58.414 [2024-04-26 23:23:47.587703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:58.414 [2024-04-26 23:23:47.587709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:58.414 passed 00:20:58.414 Test: blockdev nvme admin passthru ...passed 00:20:58.414 Test: blockdev copy ...passed 00:20:58.414 00:20:58.414 Run Summary: Type Total Ran Passed Failed Inactive 00:20:58.414 suites 1 1 n/a 0 0 00:20:58.414 tests 23 23 23 0 0 00:20:58.414 asserts 152 152 152 0 n/a 00:20:58.414 00:20:58.414 Elapsed time = 1.142 seconds 00:20:58.676 23:23:47 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:58.676 23:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.676 23:23:47 -- common/autotest_common.sh@10 -- # set +x 00:20:58.676 23:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.676 23:23:47 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:58.676 23:23:47 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:58.676 23:23:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:58.676 23:23:47 -- nvmf/common.sh@117 -- # sync 00:20:58.676 23:23:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:58.676 23:23:47 -- nvmf/common.sh@120 -- # set +e 00:20:58.676 23:23:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:58.676 23:23:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:58.676 rmmod nvme_tcp 00:20:58.676 rmmod nvme_fabrics 00:20:58.937 rmmod nvme_keyring 00:20:58.937 23:23:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:58.937 23:23:47 -- nvmf/common.sh@124 -- # set -e 00:20:58.937 23:23:47 -- nvmf/common.sh@125 -- # return 0 00:20:58.937 23:23:47 -- nvmf/common.sh@478 -- # '[' -n 3975166 ']' 00:20:58.937 23:23:47 -- nvmf/common.sh@479 -- # killprocess 3975166 00:20:58.937 23:23:47 -- common/autotest_common.sh@936 -- # '[' -z 3975166 ']' 00:20:58.937 23:23:47 -- common/autotest_common.sh@940 -- # kill -0 3975166 00:20:58.937 23:23:47 -- common/autotest_common.sh@941 -- # uname 00:20:58.937 23:23:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:58.937 23:23:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3975166 00:20:58.937 23:23:48 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:58.937 23:23:48 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:58.937 23:23:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3975166' 00:20:58.937 killing process with pid 3975166 00:20:58.937 23:23:48 -- common/autotest_common.sh@955 -- # kill 3975166 00:20:58.937 23:23:48 -- common/autotest_common.sh@960 -- # wait 3975166 00:20:59.198 23:23:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:59.198 23:23:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:59.198 23:23:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:59.198 23:23:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.198 23:23:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:59.198 23:23:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.198 23:23:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.198 23:23:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.113 23:23:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:01.113 00:21:01.113 real 0m11.779s 00:21:01.113 user 0m12.889s 00:21:01.113 sys 0m6.167s 00:21:01.113 23:23:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:01.113 23:23:50 -- common/autotest_common.sh@10 -- # set +x 00:21:01.113 ************************************ 00:21:01.113 END TEST nvmf_bdevio_no_huge 00:21:01.113 ************************************ 00:21:01.113 23:23:50 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:01.113 23:23:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:01.113 23:23:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:01.113 23:23:50 -- common/autotest_common.sh@10 -- # set +x 00:21:01.375 ************************************ 00:21:01.375 START TEST nvmf_tls 00:21:01.375 ************************************ 00:21:01.375 23:23:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:01.375 * Looking for test storage... 
00:21:01.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:01.375 23:23:50 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.375 23:23:50 -- nvmf/common.sh@7 -- # uname -s 00:21:01.375 23:23:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.375 23:23:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.375 23:23:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.375 23:23:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.375 23:23:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.375 23:23:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.375 23:23:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.375 23:23:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.375 23:23:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.375 23:23:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.375 23:23:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:01.375 23:23:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:01.375 23:23:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.375 23:23:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.375 23:23:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.375 23:23:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.375 23:23:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.375 23:23:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.375 23:23:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.375 23:23:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.375 23:23:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.375 23:23:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.375 23:23:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.375 23:23:50 -- paths/export.sh@5 -- # export PATH 00:21:01.375 23:23:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.375 23:23:50 -- nvmf/common.sh@47 -- # : 0 00:21:01.375 23:23:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:01.375 23:23:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:01.375 23:23:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.375 23:23:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.375 23:23:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.375 23:23:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:01.375 23:23:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:01.375 23:23:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:01.636 23:23:50 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:01.636 23:23:50 -- target/tls.sh@62 -- # nvmftestinit 00:21:01.636 23:23:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:01.636 23:23:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.636 23:23:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:01.636 23:23:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:01.636 23:23:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:01.636 23:23:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.636 23:23:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.636 23:23:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.636 23:23:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:01.636 23:23:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:01.636 23:23:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:01.636 23:23:50 -- common/autotest_common.sh@10 -- # set +x 00:21:09.782 23:23:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:09.782 23:23:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:09.782 23:23:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:09.782 23:23:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:09.782 23:23:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:09.782 23:23:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:09.782 23:23:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:09.782 23:23:57 -- nvmf/common.sh@295 -- # net_devs=() 00:21:09.782 23:23:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:09.782 23:23:57 -- nvmf/common.sh@296 -- # e810=() 00:21:09.782 
23:23:57 -- nvmf/common.sh@296 -- # local -ga e810 00:21:09.782 23:23:57 -- nvmf/common.sh@297 -- # x722=() 00:21:09.782 23:23:57 -- nvmf/common.sh@297 -- # local -ga x722 00:21:09.782 23:23:57 -- nvmf/common.sh@298 -- # mlx=() 00:21:09.782 23:23:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:09.782 23:23:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.782 23:23:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:09.782 23:23:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:09.782 23:23:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:09.782 23:23:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:09.782 23:23:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:09.782 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:09.782 23:23:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:09.782 23:23:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:09.782 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:09.782 23:23:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.782 23:23:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:09.783 23:23:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:09.783 23:23:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:09.783 23:23:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:09.783 23:23:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:09.783 23:23:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.783 23:23:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:09.783 23:23:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.783 23:23:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:09.783 Found net devices under 
0000:31:00.0: cvl_0_0 00:21:09.783 23:23:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.783 23:23:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:09.783 23:23:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.783 23:23:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:09.783 23:23:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.783 23:23:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:09.783 Found net devices under 0000:31:00.1: cvl_0_1 00:21:09.783 23:23:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.783 23:23:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:09.783 23:23:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:09.783 23:23:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:09.783 23:23:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:09.783 23:23:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:09.783 23:23:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.783 23:23:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.783 23:23:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.783 23:23:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:09.783 23:23:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.783 23:23:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.783 23:23:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:09.783 23:23:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.783 23:23:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.783 23:23:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:09.783 23:23:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:09.783 23:23:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.783 23:23:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.783 23:23:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.783 23:23:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.783 23:23:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:09.783 23:23:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.783 23:23:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.783 23:23:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.783 23:23:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:09.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:21:09.783 00:21:09.783 --- 10.0.0.2 ping statistics --- 00:21:09.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.783 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:21:09.783 23:23:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:21:09.783 00:21:09.783 --- 10.0.0.1 ping statistics --- 00:21:09.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.783 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:21:09.783 23:23:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.783 23:23:57 -- nvmf/common.sh@411 -- # return 0 00:21:09.783 23:23:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:09.783 23:23:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.783 23:23:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:09.783 23:23:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:09.783 23:23:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.783 23:23:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:09.783 23:23:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:09.783 23:23:57 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:09.783 23:23:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:09.783 23:23:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:09.783 23:23:57 -- common/autotest_common.sh@10 -- # set +x 00:21:09.783 23:23:57 -- nvmf/common.sh@470 -- # nvmfpid=3979845 00:21:09.783 23:23:57 -- nvmf/common.sh@471 -- # waitforlisten 3979845 00:21:09.783 23:23:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:09.783 23:23:57 -- common/autotest_common.sh@817 -- # '[' -z 3979845 ']' 00:21:09.783 23:23:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.783 23:23:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:09.783 23:23:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.783 23:23:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:09.783 23:23:57 -- common/autotest_common.sh@10 -- # set +x 00:21:09.783 [2024-04-26 23:23:58.015076] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:09.783 [2024-04-26 23:23:58.015140] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.783 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.783 [2024-04-26 23:23:58.087797] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.783 [2024-04-26 23:23:58.124385] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.783 [2024-04-26 23:23:58.124435] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.783 [2024-04-26 23:23:58.124443] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.783 [2024-04-26 23:23:58.124454] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.783 [2024-04-26 23:23:58.124461] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
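The TLS suite needs to reconfigure the socket layer before subsystem code initializes, which is why nvmfappstart is invoked with --wait-for-rpc here: the target comes up paused, the sock options are pinned over RPC, and only then does framework_start_init let boot finish. The stretch of trace that follows drives roughly this sequence (rpc.py short for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path; every call goes to the paused target):

  rpc.py sock_set_default_impl -i ssl                        # make ssl the default sock impl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # read back, expect 13
  rpc.py sock_impl_set_options -i ssl --enable-ktls          # toggle kernel TLS on
  rpc.py sock_impl_set_options -i ssl --disable-ktls         # and back off
  rpc.py framework_start_init                                # finish booting the target
  rpc.py nvmf_create_transport -t tcp -o                     # then the TCP transport as usual

The get/set pairs in the trace are assertions as much as configuration: each [[ ... != ... ]] check fails the suite if a read-back value does not match what was just set.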
00:21:09.783 [2024-04-26 23:23:58.124481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.783 23:23:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:09.783 23:23:58 -- common/autotest_common.sh@850 -- # return 0 00:21:09.783 23:23:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:09.783 23:23:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:09.783 23:23:58 -- common/autotest_common.sh@10 -- # set +x 00:21:09.783 23:23:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.783 23:23:58 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:09.783 23:23:58 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:09.783 true 00:21:09.783 23:23:58 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:09.783 23:23:58 -- target/tls.sh@73 -- # jq -r .tls_version 00:21:10.043 23:23:59 -- target/tls.sh@73 -- # version=0 00:21:10.043 23:23:59 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:10.043 23:23:59 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:10.043 23:23:59 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:10.043 23:23:59 -- target/tls.sh@81 -- # jq -r .tls_version 00:21:10.304 23:23:59 -- target/tls.sh@81 -- # version=13 00:21:10.304 23:23:59 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:10.304 23:23:59 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:10.564 23:23:59 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:10.564 23:23:59 -- target/tls.sh@89 -- # jq -r .tls_version 00:21:10.564 23:23:59 -- target/tls.sh@89 -- # version=7 00:21:10.564 23:23:59 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:10.564 23:23:59 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:10.564 23:23:59 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:10.825 23:23:59 -- target/tls.sh@96 -- # ktls=false 00:21:10.825 23:23:59 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:10.825 23:23:59 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:10.825 23:24:00 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:10.825 23:24:00 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:11.085 23:24:00 -- target/tls.sh@104 -- # ktls=true 00:21:11.085 23:24:00 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:11.085 23:24:00 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:11.347 23:24:00 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:11.347 23:24:00 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:11.347 23:24:00 -- target/tls.sh@112 -- # ktls=false 00:21:11.347 23:24:00 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:11.347 23:24:00 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:21:11.347 23:24:00 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:11.347 23:24:00 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:11.347 23:24:00 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:11.347 23:24:00 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:21:11.347 23:24:00 -- nvmf/common.sh@693 -- # digest=1 00:21:11.347 23:24:00 -- nvmf/common.sh@694 -- # python - 00:21:11.607 23:24:00 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:11.607 23:24:00 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:11.607 23:24:00 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:11.607 23:24:00 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:11.607 23:24:00 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:11.607 23:24:00 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:21:11.607 23:24:00 -- nvmf/common.sh@693 -- # digest=1 00:21:11.607 23:24:00 -- nvmf/common.sh@694 -- # python - 00:21:11.607 23:24:00 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:11.607 23:24:00 -- target/tls.sh@121 -- # mktemp 00:21:11.607 23:24:00 -- target/tls.sh@121 -- # key_path=/tmp/tmp.i0mQU05Jy1 00:21:11.607 23:24:00 -- target/tls.sh@122 -- # mktemp 00:21:11.607 23:24:00 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.lhziUR1LdY 00:21:11.607 23:24:00 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:11.607 23:24:00 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:11.607 23:24:00 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.i0mQU05Jy1 00:21:11.607 23:24:00 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.lhziUR1LdY 00:21:11.607 23:24:00 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:11.607 23:24:00 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:11.868 23:24:01 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.i0mQU05Jy1 00:21:11.868 23:24:01 -- target/tls.sh@49 -- # local key=/tmp/tmp.i0mQU05Jy1 00:21:11.868 23:24:01 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:12.129 [2024-04-26 23:24:01.190753] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.129 23:24:01 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:12.129 23:24:01 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:12.391 [2024-04-26 23:24:01.495512] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:12.391 [2024-04-26 23:24:01.495710] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.391 23:24:01 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:12.652 malloc0 00:21:12.652 23:24:01 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:12.652 23:24:01 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i0mQU05Jy1 00:21:12.913 [2024-04-26 23:24:01.939366] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:12.913 23:24:01 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.i0mQU05Jy1 00:21:12.913 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.920 Initializing NVMe Controllers 00:21:22.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:22.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:22.920 Initialization complete. Launching workers. 00:21:22.920 ======================================================== 00:21:22.920 Latency(us) 00:21:22.920 Device Information : IOPS MiB/s Average min max 00:21:22.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13190.86 51.53 4852.59 980.25 5504.60 00:21:22.920 ======================================================== 00:21:22.920 Total : 13190.86 51.53 4852.59 980.25 5504.60 00:21:22.920 00:21:22.920 23:24:12 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i0mQU05Jy1 00:21:22.920 23:24:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:22.920 23:24:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:22.920 23:24:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:22.920 23:24:12 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.i0mQU05Jy1' 00:21:22.920 23:24:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.920 23:24:12 -- target/tls.sh@28 -- # bdevperf_pid=3983286 00:21:22.920 23:24:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.920 23:24:12 -- target/tls.sh@31 -- # waitforlisten 3983286 /var/tmp/bdevperf.sock 00:21:22.920 23:24:12 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:22.920 23:24:12 -- common/autotest_common.sh@817 -- # '[' -z 3983286 ']' 00:21:22.920 23:24:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.920 23:24:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:22.920 23:24:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.920 23:24:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:22.920 23:24:12 -- common/autotest_common.sh@10 -- # set +x 00:21:22.920 [2024-04-26 23:24:12.111491] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
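Pulling the target-side setup out of that trace: TLS is opt-in at two points, per listener via the -k flag on nvmf_subsystem_add_listener and per host via --psk on nvmf_subsystem_add_host, and the initiator must both select the ssl sock implementation (-S ssl) and present the matching key file (--psk-path). Condensed, with rpc.py again standing in for the full scripts/rpc.py path and the build/bin path elided from the perf binary:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k                   # -k: this listener requires TLS
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i0mQU05Jy1    # key_1 bound to host1
  # Initiator side (run from inside the namespace in this suite):
  ip netns exec cvl_0_0_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path /tmp/tmp.i0mQU05Jy1

Note the perf numbers above (about 13.2K IOPS at qd 64 / 4 KiB from one core) were taken with tls-version 13 and ktls left disabled, i.e. with TLS record processing in userspace.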
00:21:22.920 [2024-04-26 23:24:12.111546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983286 ] 00:21:22.920 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.920 [2024-04-26 23:24:12.161431] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.181 [2024-04-26 23:24:12.188007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.181 23:24:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:23.181 23:24:12 -- common/autotest_common.sh@850 -- # return 0 00:21:23.181 23:24:12 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i0mQU05Jy1 00:21:23.181 [2024-04-26 23:24:12.390242] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.181 [2024-04-26 23:24:12.390296] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:23.467 TLSTESTn1 00:21:23.467 23:24:12 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:23.467 Running I/O for 10 seconds... 00:21:33.491 00:21:33.491 Latency(us) 00:21:33.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.491 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:33.491 Verification LBA range: start 0x0 length 0x2000 00:21:33.491 TLSTESTn1 : 10.03 3946.21 15.41 0.00 0.00 32371.86 4259.84 59856.21 00:21:33.491 =================================================================================================================== 00:21:33.491 Total : 3946.21 15.41 0.00 0.00 32371.86 4259.84 59856.21 00:21:33.491 0 00:21:33.491 23:24:22 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:33.491 23:24:22 -- target/tls.sh@45 -- # killprocess 3983286 00:21:33.491 23:24:22 -- common/autotest_common.sh@936 -- # '[' -z 3983286 ']' 00:21:33.491 23:24:22 -- common/autotest_common.sh@940 -- # kill -0 3983286 00:21:33.491 23:24:22 -- common/autotest_common.sh@941 -- # uname 00:21:33.491 23:24:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:33.491 23:24:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3983286 00:21:33.491 23:24:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:33.491 23:24:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:33.491 23:24:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3983286' 00:21:33.491 killing process with pid 3983286 00:21:33.491 23:24:22 -- common/autotest_common.sh@955 -- # kill 3983286 00:21:33.491 Received shutdown signal, test time was about 10.000000 seconds 00:21:33.491 00:21:33.491 Latency(us) 00:21:33.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.491 =================================================================================================================== 00:21:33.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.491 [2024-04-26 23:24:22.708075] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:33.491 23:24:22 -- common/autotest_common.sh@960 -- # wait 3983286 00:21:33.752 23:24:22 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhziUR1LdY 00:21:33.752 23:24:22 -- common/autotest_common.sh@638 -- # local es=0 00:21:33.752 23:24:22 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhziUR1LdY 00:21:33.752 23:24:22 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:33.752 23:24:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:33.752 23:24:22 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:33.752 23:24:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:33.752 23:24:22 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhziUR1LdY 00:21:33.752 23:24:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:33.752 23:24:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:33.752 23:24:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:33.752 23:24:22 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lhziUR1LdY' 00:21:33.752 23:24:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.752 23:24:22 -- target/tls.sh@28 -- # bdevperf_pid=3985437 00:21:33.752 23:24:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:33.752 23:24:22 -- target/tls.sh@31 -- # waitforlisten 3985437 /var/tmp/bdevperf.sock 00:21:33.752 23:24:22 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:33.752 23:24:22 -- common/autotest_common.sh@817 -- # '[' -z 3985437 ']' 00:21:33.752 23:24:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.752 23:24:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:33.752 23:24:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.752 23:24:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:33.752 23:24:22 -- common/autotest_common.sh@10 -- # set +x 00:21:33.752 [2024-04-26 23:24:22.863570] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:33.752 [2024-04-26 23:24:22.863626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985437 ] 00:21:33.752 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.752 [2024-04-26 23:24:22.913461] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.752 [2024-04-26 23:24:22.939571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.752 23:24:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:33.752 23:24:23 -- common/autotest_common.sh@850 -- # return 0 00:21:33.752 23:24:23 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lhziUR1LdY 00:21:34.011 [2024-04-26 23:24:23.141645] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.011 [2024-04-26 23:24:23.141702] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:34.011 [2024-04-26 23:24:23.149069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:34.011 [2024-04-26 23:24:23.149601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8543f0 (107): Transport endpoint is not connected 00:21:34.011 [2024-04-26 23:24:23.150596] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8543f0 (9): Bad file descriptor 00:21:34.011 [2024-04-26 23:24:23.151598] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.011 [2024-04-26 23:24:23.151604] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:34.011 [2024-04-26 23:24:23.151609] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
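Both NOT cases in this stretch are engineered key mismatches rather than transport failures. This first one dials in as nqn.2016-06.io.spdk:host1 but presents key_2 (/tmp/tmp.lhziUR1LdY), while the subsystem only registered key_1 for that host, so the TLS handshake is torn down before the fabrics CONNECT ever runs; the flush errors above (errno 107, then 9) are the dead socket, and the attach RPC below surfaces the generic -32602. A sketch of the failing call next to the registration it violates (file names are the mktemp paths from this run):

  # Registered on the target earlier, and what a passing attach would use:
  #   nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
  #       nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i0mQU05Jy1
  # Attempted here instead (fails the handshake):
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.lhziUR1LdY      # key_2: no server-side PSK matches host1 + this key

The second case further below inverts it, presenting key_1 as nqn.2016-06.io.spdk:host2, which the target rejects at the PSK identity lookup ("NVMe0R01 <hostnqn> <subnqn>") rather than during the handshake itself.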
00:21:34.011 request: 00:21:34.011 { 00:21:34.011 "name": "TLSTEST", 00:21:34.011 "trtype": "tcp", 00:21:34.011 "traddr": "10.0.0.2", 00:21:34.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.011 "adrfam": "ipv4", 00:21:34.011 "trsvcid": "4420", 00:21:34.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.011 "psk": "/tmp/tmp.lhziUR1LdY", 00:21:34.011 "method": "bdev_nvme_attach_controller", 00:21:34.011 "req_id": 1 00:21:34.011 } 00:21:34.011 Got JSON-RPC error response 00:21:34.011 response: 00:21:34.011 { 00:21:34.011 "code": -32602, 00:21:34.011 "message": "Invalid parameters" 00:21:34.011 } 00:21:34.011 23:24:23 -- target/tls.sh@36 -- # killprocess 3985437 00:21:34.011 23:24:23 -- common/autotest_common.sh@936 -- # '[' -z 3985437 ']' 00:21:34.011 23:24:23 -- common/autotest_common.sh@940 -- # kill -0 3985437 00:21:34.011 23:24:23 -- common/autotest_common.sh@941 -- # uname 00:21:34.011 23:24:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:34.011 23:24:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3985437 00:21:34.011 23:24:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:34.012 23:24:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:34.012 23:24:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3985437' 00:21:34.012 killing process with pid 3985437 00:21:34.012 23:24:23 -- common/autotest_common.sh@955 -- # kill 3985437 00:21:34.012 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.012 00:21:34.012 Latency(us) 00:21:34.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.012 =================================================================================================================== 00:21:34.012 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:34.012 [2024-04-26 23:24:23.223225] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:34.012 23:24:23 -- common/autotest_common.sh@960 -- # wait 3985437 00:21:34.272 23:24:23 -- target/tls.sh@37 -- # return 1 00:21:34.272 23:24:23 -- common/autotest_common.sh@641 -- # es=1 00:21:34.272 23:24:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:34.272 23:24:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:34.272 23:24:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:34.272 23:24:23 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.i0mQU05Jy1 00:21:34.272 23:24:23 -- common/autotest_common.sh@638 -- # local es=0 00:21:34.272 23:24:23 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.i0mQU05Jy1 00:21:34.272 23:24:23 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:34.272 23:24:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:34.272 23:24:23 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:34.272 23:24:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:34.272 23:24:23 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.i0mQU05Jy1 00:21:34.272 23:24:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:34.272 23:24:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:34.272 23:24:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:21:34.272 23:24:23 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.i0mQU05Jy1' 00:21:34.272 23:24:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.272 23:24:23 -- target/tls.sh@28 -- # bdevperf_pid=3985471 00:21:34.272 23:24:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.272 23:24:23 -- target/tls.sh@31 -- # waitforlisten 3985471 /var/tmp/bdevperf.sock 00:21:34.272 23:24:23 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:34.272 23:24:23 -- common/autotest_common.sh@817 -- # '[' -z 3985471 ']' 00:21:34.272 23:24:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.272 23:24:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:34.272 23:24:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.272 23:24:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:34.272 23:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.272 [2024-04-26 23:24:23.369985] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:34.272 [2024-04-26 23:24:23.370042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985471 ] 00:21:34.272 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.273 [2024-04-26 23:24:23.420149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.273 [2024-04-26 23:24:23.446391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.273 23:24:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:34.273 23:24:23 -- common/autotest_common.sh@850 -- # return 0 00:21:34.273 23:24:23 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.i0mQU05Jy1 00:21:34.533 [2024-04-26 23:24:23.644400] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.533 [2024-04-26 23:24:23.644452] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:34.533 [2024-04-26 23:24:23.650199] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:34.533 [2024-04-26 23:24:23.650222] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:34.533 [2024-04-26 23:24:23.650246] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:34.533 [2024-04-26 23:24:23.651372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22213f0 (107): Transport endpoint is not connected 00:21:34.533 [2024-04-26 23:24:23.652367] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22213f0 (9): Bad file descriptor 00:21:34.533 [2024-04-26 23:24:23.653370] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.533 [2024-04-26 23:24:23.653376] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:34.533 [2024-04-26 23:24:23.653381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.533 request: 00:21:34.533 { 00:21:34.533 "name": "TLSTEST", 00:21:34.533 "trtype": "tcp", 00:21:34.533 "traddr": "10.0.0.2", 00:21:34.533 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:34.533 "adrfam": "ipv4", 00:21:34.533 "trsvcid": "4420", 00:21:34.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.533 "psk": "/tmp/tmp.i0mQU05Jy1", 00:21:34.533 "method": "bdev_nvme_attach_controller", 00:21:34.533 "req_id": 1 00:21:34.533 } 00:21:34.533 Got JSON-RPC error response 00:21:34.533 response: 00:21:34.533 { 00:21:34.533 "code": -32602, 00:21:34.533 "message": "Invalid parameters" 00:21:34.533 } 00:21:34.533 23:24:23 -- target/tls.sh@36 -- # killprocess 3985471 00:21:34.533 23:24:23 -- common/autotest_common.sh@936 -- # '[' -z 3985471 ']' 00:21:34.533 23:24:23 -- common/autotest_common.sh@940 -- # kill -0 3985471 00:21:34.533 23:24:23 -- common/autotest_common.sh@941 -- # uname 00:21:34.533 23:24:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:34.533 23:24:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3985471 00:21:34.533 23:24:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:34.533 23:24:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:34.533 23:24:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3985471' 00:21:34.533 killing process with pid 3985471 00:21:34.533 23:24:23 -- common/autotest_common.sh@955 -- # kill 3985471 00:21:34.533 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.533 00:21:34.533 Latency(us) 00:21:34.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.533 =================================================================================================================== 00:21:34.533 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:34.533 [2024-04-26 23:24:23.723917] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:34.534 23:24:23 -- common/autotest_common.sh@960 -- # wait 3985471 00:21:34.794 23:24:23 -- target/tls.sh@37 -- # return 1 00:21:34.794 23:24:23 -- common/autotest_common.sh@641 -- # es=1 00:21:34.794 23:24:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:34.794 23:24:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:34.794 23:24:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:34.794 23:24:23 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.i0mQU05Jy1 00:21:34.794 23:24:23 -- common/autotest_common.sh@638 -- # local es=0 00:21:34.794 23:24:23 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.i0mQU05Jy1 00:21:34.794 23:24:23 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:34.794 23:24:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:34.794 23:24:23 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:34.794 23:24:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:34.794 23:24:23 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.i0mQU05Jy1 00:21:34.794 23:24:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:34.794 23:24:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:34.794 23:24:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:34.794 23:24:23 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.i0mQU05Jy1' 00:21:34.794 23:24:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.794 23:24:23 -- target/tls.sh@28 -- # bdevperf_pid=3985490 00:21:34.794 23:24:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.794 23:24:23 -- target/tls.sh@31 -- # waitforlisten 3985490 /var/tmp/bdevperf.sock 00:21:34.794 23:24:23 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:34.794 23:24:23 -- common/autotest_common.sh@817 -- # '[' -z 3985490 ']' 00:21:34.794 23:24:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.795 23:24:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:34.795 23:24:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.795 23:24:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:34.795 23:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.795 [2024-04-26 23:24:23.871334] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:34.795 [2024-04-26 23:24:23.871387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985490 ] 00:21:34.795 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.795 [2024-04-26 23:24:23.922718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.795 [2024-04-26 23:24:23.947210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.795 23:24:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:34.795 23:24:24 -- common/autotest_common.sh@850 -- # return 0 00:21:34.795 23:24:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i0mQU05Jy1 00:21:35.056 [2024-04-26 23:24:24.161392] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.056 [2024-04-26 23:24:24.161451] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:35.056 [2024-04-26 23:24:24.171474] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:35.056 [2024-04-26 23:24:24.171495] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:35.056 [2024-04-26 23:24:24.171518] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:35.056 [2024-04-26 23:24:24.172276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daa3f0 (107): Transport endpoint is not connected 00:21:35.056 [2024-04-26 23:24:24.173270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daa3f0 (9): Bad file descriptor 00:21:35.056 [2024-04-26 23:24:24.174271] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:35.056 [2024-04-26 23:24:24.174278] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:35.056 [2024-04-26 23:24:24.174284] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
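Both PSK-mismatch failures hinge on the identity string visible in the errors ("NVMe0R01 <hostnqn> <subnqn>"): the target only knows the PSK under the host/subsystem pair it was registered with, so swapping in host2 (run 149) or cnode2 (run 152) produces a lookup miss before any handshake can happen. A sketch of that lookup, treating the NVMe0R01 prefix as an opaque constant taken verbatim from the log:

    def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
        # Format copied from the tcp_sock_get_key errors above; the prefix
        # encodes protocol/hash details that this sketch does not model.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    # PSK was registered for host1/cnode1 only:
    registered = {tls_psk_identity("nqn.2016-06.io.spdk:host1",
                                   "nqn.2016-06.io.spdk:cnode1"): "/tmp/psk"}

    probe = tls_psk_identity("nqn.2016-06.io.spdk:host1",
                             "nqn.2016-06.io.spdk:cnode2")
    assert probe not in registered  # -> "Could not find PSK for identity"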
00:21:35.056 request: 00:21:35.056 { 00:21:35.056 "name": "TLSTEST", 00:21:35.056 "trtype": "tcp", 00:21:35.056 "traddr": "10.0.0.2", 00:21:35.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:35.056 "adrfam": "ipv4", 00:21:35.056 "trsvcid": "4420", 00:21:35.056 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:35.056 "psk": "/tmp/tmp.i0mQU05Jy1", 00:21:35.056 "method": "bdev_nvme_attach_controller", 00:21:35.056 "req_id": 1 00:21:35.056 } 00:21:35.056 Got JSON-RPC error response 00:21:35.056 response: 00:21:35.056 { 00:21:35.056 "code": -32602, 00:21:35.056 "message": "Invalid parameters" 00:21:35.056 } 00:21:35.056 23:24:24 -- target/tls.sh@36 -- # killprocess 3985490 00:21:35.056 23:24:24 -- common/autotest_common.sh@936 -- # '[' -z 3985490 ']' 00:21:35.056 23:24:24 -- common/autotest_common.sh@940 -- # kill -0 3985490 00:21:35.056 23:24:24 -- common/autotest_common.sh@941 -- # uname 00:21:35.056 23:24:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.056 23:24:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3985490 00:21:35.056 23:24:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:35.056 23:24:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:35.056 23:24:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3985490' 00:21:35.056 killing process with pid 3985490 00:21:35.056 23:24:24 -- common/autotest_common.sh@955 -- # kill 3985490 00:21:35.056 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.056 00:21:35.056 Latency(us) 00:21:35.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.056 =================================================================================================================== 00:21:35.056 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:35.056 [2024-04-26 23:24:24.263361] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:35.056 23:24:24 -- common/autotest_common.sh@960 -- # wait 3985490 00:21:35.317 23:24:24 -- target/tls.sh@37 -- # return 1 00:21:35.317 23:24:24 -- common/autotest_common.sh@641 -- # es=1 00:21:35.317 23:24:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:35.317 23:24:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:35.317 23:24:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:35.317 23:24:24 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:35.317 23:24:24 -- common/autotest_common.sh@638 -- # local es=0 00:21:35.317 23:24:24 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:35.317 23:24:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:35.317 23:24:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:35.317 23:24:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:35.317 23:24:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:35.317 23:24:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:35.317 23:24:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:35.317 23:24:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:35.317 23:24:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:35.317 23:24:24 -- target/tls.sh@23 -- # psk= 
00:21:35.317 23:24:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.317 23:24:24 -- target/tls.sh@28 -- # bdevperf_pid=3985705 00:21:35.317 23:24:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:35.317 23:24:24 -- target/tls.sh@31 -- # waitforlisten 3985705 /var/tmp/bdevperf.sock 00:21:35.317 23:24:24 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:35.317 23:24:24 -- common/autotest_common.sh@817 -- # '[' -z 3985705 ']' 00:21:35.317 23:24:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.317 23:24:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:35.317 23:24:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.317 23:24:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:35.317 23:24:24 -- common/autotest_common.sh@10 -- # set +x 00:21:35.317 [2024-04-26 23:24:24.409484] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:35.317 [2024-04-26 23:24:24.409537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985705 ] 00:21:35.317 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.317 [2024-04-26 23:24:24.460518] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.317 [2024-04-26 23:24:24.485719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.317 23:24:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:35.317 23:24:24 -- common/autotest_common.sh@850 -- # return 0 00:21:35.317 23:24:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:35.578 [2024-04-26 23:24:24.702885] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:35.578 [2024-04-26 23:24:24.704709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2263a00 (9): Bad file descriptor 00:21:35.578 [2024-04-26 23:24:24.705708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.578 [2024-04-26 23:24:24.705718] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:35.578 [2024-04-26 23:24:24.705724] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
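This run drops --psk entirely (note the bare psk= assignment above), so bdevperf attempts a plain-text connect to a listener that, per the tls.sh setup traced later with -k, requires TLS, and the socket is simply torn down. In terms of the earlier rpc_call sketch, the only difference is the missing "psk" key, matching the request body printed below:

    # Hypothetical params for the earlier rpc_call sketch; identical to the
    # first attach attempt except that no "psk" key is present at all.
    params = {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
    }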
00:21:35.578 request: 00:21:35.578 { 00:21:35.578 "name": "TLSTEST", 00:21:35.578 "trtype": "tcp", 00:21:35.578 "traddr": "10.0.0.2", 00:21:35.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:35.578 "adrfam": "ipv4", 00:21:35.578 "trsvcid": "4420", 00:21:35.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.578 "method": "bdev_nvme_attach_controller", 00:21:35.578 "req_id": 1 00:21:35.578 } 00:21:35.578 Got JSON-RPC error response 00:21:35.578 response: 00:21:35.578 { 00:21:35.578 "code": -32602, 00:21:35.578 "message": "Invalid parameters" 00:21:35.578 } 00:21:35.578 23:24:24 -- target/tls.sh@36 -- # killprocess 3985705 00:21:35.578 23:24:24 -- common/autotest_common.sh@936 -- # '[' -z 3985705 ']' 00:21:35.578 23:24:24 -- common/autotest_common.sh@940 -- # kill -0 3985705 00:21:35.578 23:24:24 -- common/autotest_common.sh@941 -- # uname 00:21:35.578 23:24:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.578 23:24:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3985705 00:21:35.578 23:24:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:35.578 23:24:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:35.578 23:24:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3985705' 00:21:35.578 killing process with pid 3985705 00:21:35.578 23:24:24 -- common/autotest_common.sh@955 -- # kill 3985705 00:21:35.578 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.578 00:21:35.578 Latency(us) 00:21:35.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.578 =================================================================================================================== 00:21:35.578 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:35.578 23:24:24 -- common/autotest_common.sh@960 -- # wait 3985705 00:21:35.837 23:24:24 -- target/tls.sh@37 -- # return 1 00:21:35.837 23:24:24 -- common/autotest_common.sh@641 -- # es=1 00:21:35.837 23:24:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:35.837 23:24:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:35.837 23:24:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:35.837 23:24:24 -- target/tls.sh@158 -- # killprocess 3979845 00:21:35.837 23:24:24 -- common/autotest_common.sh@936 -- # '[' -z 3979845 ']' 00:21:35.837 23:24:24 -- common/autotest_common.sh@940 -- # kill -0 3979845 00:21:35.837 23:24:24 -- common/autotest_common.sh@941 -- # uname 00:21:35.837 23:24:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.837 23:24:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3979845 00:21:35.837 23:24:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:35.837 23:24:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:35.837 23:24:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3979845' 00:21:35.837 killing process with pid 3979845 00:21:35.837 23:24:24 -- common/autotest_common.sh@955 -- # kill 3979845 00:21:35.837 [2024-04-26 23:24:24.941335] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:35.837 23:24:24 -- common/autotest_common.sh@960 -- # wait 3979845 00:21:35.837 23:24:25 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:35.837 23:24:25 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:21:35.837 23:24:25 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:35.837 23:24:25 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:35.837 23:24:25 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:35.837 23:24:25 -- nvmf/common.sh@693 -- # digest=2 00:21:35.837 23:24:25 -- nvmf/common.sh@694 -- # python - 00:21:36.097 23:24:25 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:36.097 23:24:25 -- target/tls.sh@160 -- # mktemp 00:21:36.097 23:24:25 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.bmCTHB2doi 00:21:36.097 23:24:25 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:36.097 23:24:25 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.bmCTHB2doi 00:21:36.097 23:24:25 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:36.097 23:24:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:36.097 23:24:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:36.097 23:24:25 -- common/autotest_common.sh@10 -- # set +x 00:21:36.097 23:24:25 -- nvmf/common.sh@470 -- # nvmfpid=3985845 00:21:36.097 23:24:25 -- nvmf/common.sh@471 -- # waitforlisten 3985845 00:21:36.097 23:24:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:36.097 23:24:25 -- common/autotest_common.sh@817 -- # '[' -z 3985845 ']' 00:21:36.097 23:24:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.097 23:24:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:36.097 23:24:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.097 23:24:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:36.097 23:24:25 -- common/autotest_common.sh@10 -- # set +x 00:21:36.097 [2024-04-26 23:24:25.190356] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:36.097 [2024-04-26 23:24:25.190407] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.097 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.097 [2024-04-26 23:24:25.255356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.097 [2024-04-26 23:24:25.283987] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.097 [2024-04-26 23:24:25.284025] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.097 [2024-04-26 23:24:25.284032] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.097 [2024-04-26 23:24:25.284039] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.097 [2024-04-26 23:24:25.284045] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
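The key_long value above is produced by the inline "python -" step in nvmf/common.sh. Judging from the inputs and the output envelope (NVMeTLSkey-1:02:...==:), the transform appears to be: take the key characters as bytes, append their CRC32 little-endian, and base64 the result — a sketch under that assumption:

    import base64, struct, zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        # Assumed reconstruction of the inline "python -" step above:
        # NVMeTLSkey-1:<digest, zero-padded>:<base64(key || crc32_le(key))>:
        data = key.encode()
        blob = data + struct.pack("<I", zlib.crc32(data))
        return f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(blob).decode()}:"

    # Should reproduce the key_long captured above for digest 2:
    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))

The script then writes the key to a mktemp path and chmod 0600 it; that 0600 mode matters, as the 0666 variant later in the log shows.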
00:21:36.097 [2024-04-26 23:24:25.284064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.097 23:24:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:36.098 23:24:25 -- common/autotest_common.sh@850 -- # return 0 00:21:36.358 23:24:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:36.358 23:24:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:36.358 23:24:25 -- common/autotest_common.sh@10 -- # set +x 00:21:36.358 23:24:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.358 23:24:25 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.bmCTHB2doi 00:21:36.358 23:24:25 -- target/tls.sh@49 -- # local key=/tmp/tmp.bmCTHB2doi 00:21:36.358 23:24:25 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.358 [2024-04-26 23:24:25.530805] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.358 23:24:25 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:36.618 23:24:25 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:36.618 [2024-04-26 23:24:25.819541] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.618 [2024-04-26 23:24:25.819754] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.618 23:24:25 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:36.878 malloc0 00:21:36.878 23:24:25 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:37.139 23:24:26 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bmCTHB2doi 00:21:37.139 [2024-04-26 23:24:26.263439] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:37.139 23:24:26 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bmCTHB2doi 00:21:37.139 23:24:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:37.139 23:24:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:37.139 23:24:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:37.139 23:24:26 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bmCTHB2doi' 00:21:37.139 23:24:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:37.139 23:24:26 -- target/tls.sh@28 -- # bdevperf_pid=3986190 00:21:37.139 23:24:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:37.139 23:24:26 -- target/tls.sh@31 -- # waitforlisten 3986190 /var/tmp/bdevperf.sock 00:21:37.139 23:24:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:37.139 23:24:26 -- common/autotest_common.sh@817 -- # '[' -z 3986190 ']' 00:21:37.139 23:24:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.139 23:24:26 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:21:37.139 23:24:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:37.139 23:24:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:37.139 23:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:37.139 [2024-04-26 23:24:26.333782] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:37.139 [2024-04-26 23:24:26.333862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3986190 ] 00:21:37.139 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.139 [2024-04-26 23:24:26.385149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.399 [2024-04-26 23:24:26.411783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.969 23:24:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:37.969 23:24:27 -- common/autotest_common.sh@850 -- # return 0 00:21:37.969 23:24:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bmCTHB2doi 00:21:37.969 [2024-04-26 23:24:27.187294] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.969 [2024-04-26 23:24:27.187349] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:38.229 TLSTESTn1 00:21:38.229 23:24:27 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:38.229 Running I/O for 10 seconds... 
00:21:48.226 00:21:48.226 Latency(us) 00:21:48.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.226 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:48.226 Verification LBA range: start 0x0 length 0x2000 00:21:48.226 TLSTESTn1 : 10.02 3675.89 14.36 0.00 0.00 34776.90 6799.36 72526.51 00:21:48.226 =================================================================================================================== 00:21:48.226 Total : 3675.89 14.36 0.00 0.00 34776.90 6799.36 72526.51 00:21:48.226 0 00:21:48.226 23:24:37 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:48.227 23:24:37 -- target/tls.sh@45 -- # killprocess 3986190 00:21:48.227 23:24:37 -- common/autotest_common.sh@936 -- # '[' -z 3986190 ']' 00:21:48.227 23:24:37 -- common/autotest_common.sh@940 -- # kill -0 3986190 00:21:48.227 23:24:37 -- common/autotest_common.sh@941 -- # uname 00:21:48.227 23:24:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:48.227 23:24:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3986190 00:21:48.489 23:24:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:48.489 23:24:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:48.489 23:24:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3986190' 00:21:48.489 killing process with pid 3986190 00:21:48.489 23:24:37 -- common/autotest_common.sh@955 -- # kill 3986190 00:21:48.489 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.489 00:21:48.489 Latency(us) 00:21:48.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.489 =================================================================================================================== 00:21:48.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:48.489 [2024-04-26 23:24:37.484417] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:48.489 23:24:37 -- common/autotest_common.sh@960 -- # wait 3986190 00:21:48.489 23:24:37 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.bmCTHB2doi 00:21:48.489 23:24:37 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bmCTHB2doi 00:21:48.489 23:24:37 -- common/autotest_common.sh@638 -- # local es=0 00:21:48.489 23:24:37 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bmCTHB2doi 00:21:48.489 23:24:37 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:48.489 23:24:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:48.489 23:24:37 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:48.489 23:24:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:48.489 23:24:37 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bmCTHB2doi 00:21:48.489 23:24:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:48.489 23:24:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:48.489 23:24:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:48.489 23:24:37 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bmCTHB2doi' 00:21:48.489 23:24:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:48.489 23:24:37 -- target/tls.sh@28 -- # 
bdevperf_pid=3988217 00:21:48.489 23:24:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:48.489 23:24:37 -- target/tls.sh@31 -- # waitforlisten 3988217 /var/tmp/bdevperf.sock 00:21:48.489 23:24:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:48.489 23:24:37 -- common/autotest_common.sh@817 -- # '[' -z 3988217 ']' 00:21:48.489 23:24:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.489 23:24:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:48.489 23:24:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.489 23:24:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:48.489 23:24:37 -- common/autotest_common.sh@10 -- # set +x 00:21:48.489 [2024-04-26 23:24:37.642046] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:48.489 [2024-04-26 23:24:37.642100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3988217 ] 00:21:48.489 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.489 [2024-04-26 23:24:37.693115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.489 [2024-04-26 23:24:37.717329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.749 23:24:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:48.749 23:24:37 -- common/autotest_common.sh@850 -- # return 0 00:21:48.749 23:24:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bmCTHB2doi 00:21:48.749 [2024-04-26 23:24:37.931651] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.750 [2024-04-26 23:24:37.931689] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:48.750 [2024-04-26 23:24:37.931694] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.bmCTHB2doi 00:21:48.750 request: 00:21:48.750 { 00:21:48.750 "name": "TLSTEST", 00:21:48.750 "trtype": "tcp", 00:21:48.750 "traddr": "10.0.0.2", 00:21:48.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.750 "adrfam": "ipv4", 00:21:48.750 "trsvcid": "4420", 00:21:48.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.750 "psk": "/tmp/tmp.bmCTHB2doi", 00:21:48.750 "method": "bdev_nvme_attach_controller", 00:21:48.750 "req_id": 1 00:21:48.750 } 00:21:48.750 Got JSON-RPC error response 00:21:48.750 response: 00:21:48.750 { 00:21:48.750 "code": -1, 00:21:48.750 "message": "Operation not permitted" 00:21:48.750 } 00:21:48.750 23:24:37 -- target/tls.sh@36 -- # killprocess 3988217 00:21:48.750 23:24:37 -- common/autotest_common.sh@936 -- # '[' -z 3988217 ']' 00:21:48.750 23:24:37 -- common/autotest_common.sh@940 -- # kill -0 3988217 00:21:48.750 23:24:37 -- common/autotest_common.sh@941 -- # uname 00:21:48.750 23:24:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:48.750 
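The chmod 0666 at target/tls.sh@170 is exactly what run 171 exercises: bdev_nvme_load_psk rejects the group/other-readable key file ("Incorrect permissions for PSK file") and the RPC surfaces -1 "Operation not permitted". The precise mask SPDK tests is not shown in the log; a plausible 0600-style check would look like:

    import os, stat

    def psk_file_permissions_ok(path: str) -> bool:
        # Assumed check: reject any group/other permission bits, i.e. the
        # file must be 0600 or stricter, matching the chmod dance in tls.sh.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

Restoring 0600 (as target/tls.sh@181 does further down) clears the failure.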
23:24:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3988217 00:21:49.011 23:24:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:49.011 23:24:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:49.011 23:24:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3988217' 00:21:49.011 killing process with pid 3988217 00:21:49.011 23:24:38 -- common/autotest_common.sh@955 -- # kill 3988217 00:21:49.011 Received shutdown signal, test time was about 10.000000 seconds 00:21:49.011 00:21:49.011 Latency(us) 00:21:49.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.011 =================================================================================================================== 00:21:49.011 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:49.011 23:24:38 -- common/autotest_common.sh@960 -- # wait 3988217 00:21:49.011 23:24:38 -- target/tls.sh@37 -- # return 1 00:21:49.011 23:24:38 -- common/autotest_common.sh@641 -- # es=1 00:21:49.011 23:24:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:49.011 23:24:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:49.011 23:24:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:49.011 23:24:38 -- target/tls.sh@174 -- # killprocess 3985845 00:21:49.011 23:24:38 -- common/autotest_common.sh@936 -- # '[' -z 3985845 ']' 00:21:49.011 23:24:38 -- common/autotest_common.sh@940 -- # kill -0 3985845 00:21:49.011 23:24:38 -- common/autotest_common.sh@941 -- # uname 00:21:49.011 23:24:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:49.011 23:24:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3985845 00:21:49.011 23:24:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:49.011 23:24:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:49.011 23:24:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3985845' 00:21:49.011 killing process with pid 3985845 00:21:49.011 23:24:38 -- common/autotest_common.sh@955 -- # kill 3985845 00:21:49.011 [2024-04-26 23:24:38.168432] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:49.011 23:24:38 -- common/autotest_common.sh@960 -- # wait 3985845 00:21:49.271 23:24:38 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:49.271 23:24:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:49.271 23:24:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:49.271 23:24:38 -- common/autotest_common.sh@10 -- # set +x 00:21:49.271 23:24:38 -- nvmf/common.sh@470 -- # nvmfpid=3988500 00:21:49.271 23:24:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:49.271 23:24:38 -- nvmf/common.sh@471 -- # waitforlisten 3988500 00:21:49.271 23:24:38 -- common/autotest_common.sh@817 -- # '[' -z 3988500 ']' 00:21:49.271 23:24:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.271 23:24:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:49.271 23:24:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:49.271 23:24:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:49.271 23:24:38 -- common/autotest_common.sh@10 -- # set +x 00:21:49.271 [2024-04-26 23:24:38.335765] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:49.271 [2024-04-26 23:24:38.335822] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.271 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.271 [2024-04-26 23:24:38.399406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.271 [2024-04-26 23:24:38.427559] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.272 [2024-04-26 23:24:38.427596] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.272 [2024-04-26 23:24:38.427604] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.272 [2024-04-26 23:24:38.427610] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.272 [2024-04-26 23:24:38.427616] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.272 [2024-04-26 23:24:38.427638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.272 23:24:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:49.272 23:24:38 -- common/autotest_common.sh@850 -- # return 0 00:21:49.272 23:24:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:49.272 23:24:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:49.272 23:24:38 -- common/autotest_common.sh@10 -- # set +x 00:21:49.534 23:24:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.534 23:24:38 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.bmCTHB2doi 00:21:49.534 23:24:38 -- common/autotest_common.sh@638 -- # local es=0 00:21:49.534 23:24:38 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.bmCTHB2doi 00:21:49.534 23:24:38 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:21:49.534 23:24:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:49.534 23:24:38 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:21:49.534 23:24:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:49.534 23:24:38 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.bmCTHB2doi 00:21:49.534 23:24:38 -- target/tls.sh@49 -- # local key=/tmp/tmp.bmCTHB2doi 00:21:49.534 23:24:38 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:49.534 [2024-04-26 23:24:38.678435] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.534 23:24:38 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:49.794 23:24:38 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:49.794 [2024-04-26 23:24:38.967147] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.794 [2024-04-26 23:24:38.967355] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.794 23:24:38 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:50.055 malloc0 00:21:50.055 23:24:39 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:50.055 23:24:39 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bmCTHB2doi 00:21:50.316 [2024-04-26 23:24:39.395026] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:50.316 [2024-04-26 23:24:39.395051] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:50.316 [2024-04-26 23:24:39.395074] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:21:50.316 request: 00:21:50.316 { 00:21:50.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.316 "host": "nqn.2016-06.io.spdk:host1", 00:21:50.316 "psk": "/tmp/tmp.bmCTHB2doi", 00:21:50.316 "method": "nvmf_subsystem_add_host", 00:21:50.316 "req_id": 1 00:21:50.316 } 00:21:50.316 Got JSON-RPC error response 00:21:50.316 response: 00:21:50.316 { 00:21:50.317 "code": -32603, 00:21:50.317 "message": "Internal error" 00:21:50.317 } 00:21:50.317 23:24:39 -- common/autotest_common.sh@641 -- # es=1 00:21:50.317 23:24:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:50.317 23:24:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:50.317 23:24:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:50.317 23:24:39 -- target/tls.sh@180 -- # killprocess 3988500 00:21:50.317 23:24:39 -- common/autotest_common.sh@936 -- # '[' -z 3988500 ']' 00:21:50.317 23:24:39 -- common/autotest_common.sh@940 -- # kill -0 3988500 00:21:50.317 23:24:39 -- common/autotest_common.sh@941 -- # uname 00:21:50.317 23:24:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:50.317 23:24:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3988500 00:21:50.317 23:24:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:50.317 23:24:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:50.317 23:24:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3988500' 00:21:50.317 killing process with pid 3988500 00:21:50.317 23:24:39 -- common/autotest_common.sh@955 -- # kill 3988500 00:21:50.317 23:24:39 -- common/autotest_common.sh@960 -- # wait 3988500 00:21:50.578 23:24:39 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.bmCTHB2doi 00:21:50.578 23:24:39 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:50.578 23:24:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:50.578 23:24:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:50.578 23:24:39 -- common/autotest_common.sh@10 -- # set +x 00:21:50.578 23:24:39 -- nvmf/common.sh@470 -- # nvmfpid=3988665 00:21:50.578 23:24:39 -- nvmf/common.sh@471 -- # waitforlisten 3988665 00:21:50.578 23:24:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:50.578 23:24:39 -- common/autotest_common.sh@817 -- # '[' -z 3988665 ']' 00:21:50.578 23:24:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.578 23:24:39 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:21:50.578 23:24:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.578 23:24:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:50.578 23:24:39 -- common/autotest_common.sh@10 -- # set +x 00:21:50.578 [2024-04-26 23:24:39.650915] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:50.578 [2024-04-26 23:24:39.650974] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.578 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.578 [2024-04-26 23:24:39.716243] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.578 [2024-04-26 23:24:39.745292] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.578 [2024-04-26 23:24:39.745331] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.578 [2024-04-26 23:24:39.745339] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.578 [2024-04-26 23:24:39.745345] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.578 [2024-04-26 23:24:39.745351] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.578 [2024-04-26 23:24:39.745372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.578 23:24:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:50.578 23:24:39 -- common/autotest_common.sh@850 -- # return 0 00:21:50.578 23:24:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:50.578 23:24:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:50.578 23:24:39 -- common/autotest_common.sh@10 -- # set +x 00:21:50.838 23:24:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.838 23:24:39 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.bmCTHB2doi 00:21:50.838 23:24:39 -- target/tls.sh@49 -- # local key=/tmp/tmp.bmCTHB2doi 00:21:50.838 23:24:39 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:50.838 [2024-04-26 23:24:40.004352] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.838 23:24:40 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:51.098 23:24:40 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:51.098 [2024-04-26 23:24:40.309117] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.098 [2024-04-26 23:24:40.309332] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.099 23:24:40 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:51.359 malloc0 00:21:51.359 23:24:40 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:51.619 23:24:40 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bmCTHB2doi 00:21:51.619 [2024-04-26 23:24:40.757024] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:51.619 23:24:40 -- target/tls.sh@188 -- # bdevperf_pid=3988953 00:21:51.619 23:24:40 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.619 23:24:40 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:51.619 23:24:40 -- target/tls.sh@191 -- # waitforlisten 3988953 /var/tmp/bdevperf.sock 00:21:51.619 23:24:40 -- common/autotest_common.sh@817 -- # '[' -z 3988953 ']' 00:21:51.619 23:24:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.619 23:24:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:51.619 23:24:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.619 23:24:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:51.619 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:21:51.619 [2024-04-26 23:24:40.815961] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:51.619 [2024-04-26 23:24:40.816012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3988953 ] 00:21:51.619 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.619 [2024-04-26 23:24:40.866354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.879 [2024-04-26 23:24:40.892827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.879 23:24:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:51.879 23:24:40 -- common/autotest_common.sh@850 -- # return 0 00:21:51.879 23:24:40 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bmCTHB2doi 00:21:51.879 [2024-04-26 23:24:41.098928] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.879 [2024-04-26 23:24:41.098985] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:52.139 TLSTESTn1 00:21:52.139 23:24:41 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:52.400 23:24:41 -- target/tls.sh@196 -- # tgtconf='{ 00:21:52.400 "subsystems": [ 00:21:52.400 { 00:21:52.400 "subsystem": "keyring", 00:21:52.400 "config": [] 00:21:52.400 }, 00:21:52.400 { 00:21:52.400 "subsystem": "iobuf", 00:21:52.400 "config": [ 00:21:52.400 { 00:21:52.400 "method": "iobuf_set_options", 00:21:52.400 "params": { 00:21:52.400 
"small_pool_count": 8192, 00:21:52.400 "large_pool_count": 1024, 00:21:52.400 "small_bufsize": 8192, 00:21:52.400 "large_bufsize": 135168 00:21:52.400 } 00:21:52.400 } 00:21:52.400 ] 00:21:52.400 }, 00:21:52.400 { 00:21:52.400 "subsystem": "sock", 00:21:52.400 "config": [ 00:21:52.400 { 00:21:52.400 "method": "sock_impl_set_options", 00:21:52.400 "params": { 00:21:52.400 "impl_name": "posix", 00:21:52.400 "recv_buf_size": 2097152, 00:21:52.400 "send_buf_size": 2097152, 00:21:52.400 "enable_recv_pipe": true, 00:21:52.400 "enable_quickack": false, 00:21:52.400 "enable_placement_id": 0, 00:21:52.400 "enable_zerocopy_send_server": true, 00:21:52.400 "enable_zerocopy_send_client": false, 00:21:52.400 "zerocopy_threshold": 0, 00:21:52.400 "tls_version": 0, 00:21:52.400 "enable_ktls": false 00:21:52.400 } 00:21:52.400 }, 00:21:52.400 { 00:21:52.400 "method": "sock_impl_set_options", 00:21:52.400 "params": { 00:21:52.400 "impl_name": "ssl", 00:21:52.400 "recv_buf_size": 4096, 00:21:52.400 "send_buf_size": 4096, 00:21:52.400 "enable_recv_pipe": true, 00:21:52.400 "enable_quickack": false, 00:21:52.400 "enable_placement_id": 0, 00:21:52.400 "enable_zerocopy_send_server": true, 00:21:52.400 "enable_zerocopy_send_client": false, 00:21:52.400 "zerocopy_threshold": 0, 00:21:52.400 "tls_version": 0, 00:21:52.400 "enable_ktls": false 00:21:52.400 } 00:21:52.400 } 00:21:52.400 ] 00:21:52.400 }, 00:21:52.400 { 00:21:52.400 "subsystem": "vmd", 00:21:52.400 "config": [] 00:21:52.400 }, 00:21:52.400 { 00:21:52.400 "subsystem": "accel", 00:21:52.400 "config": [ 00:21:52.400 { 00:21:52.400 "method": "accel_set_options", 00:21:52.400 "params": { 00:21:52.400 "small_cache_size": 128, 00:21:52.400 "large_cache_size": 16, 00:21:52.400 "task_count": 2048, 00:21:52.400 "sequence_count": 2048, 00:21:52.400 "buf_count": 2048 00:21:52.400 } 00:21:52.400 } 00:21:52.400 ] 00:21:52.400 }, 00:21:52.400 { 00:21:52.400 "subsystem": "bdev", 00:21:52.401 "config": [ 00:21:52.401 { 00:21:52.401 "method": "bdev_set_options", 00:21:52.401 "params": { 00:21:52.401 "bdev_io_pool_size": 65535, 00:21:52.401 "bdev_io_cache_size": 256, 00:21:52.401 "bdev_auto_examine": true, 00:21:52.401 "iobuf_small_cache_size": 128, 00:21:52.401 "iobuf_large_cache_size": 16 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "bdev_raid_set_options", 00:21:52.401 "params": { 00:21:52.401 "process_window_size_kb": 1024 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "bdev_iscsi_set_options", 00:21:52.401 "params": { 00:21:52.401 "timeout_sec": 30 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "bdev_nvme_set_options", 00:21:52.401 "params": { 00:21:52.401 "action_on_timeout": "none", 00:21:52.401 "timeout_us": 0, 00:21:52.401 "timeout_admin_us": 0, 00:21:52.401 "keep_alive_timeout_ms": 10000, 00:21:52.401 "arbitration_burst": 0, 00:21:52.401 "low_priority_weight": 0, 00:21:52.401 "medium_priority_weight": 0, 00:21:52.401 "high_priority_weight": 0, 00:21:52.401 "nvme_adminq_poll_period_us": 10000, 00:21:52.401 "nvme_ioq_poll_period_us": 0, 00:21:52.401 "io_queue_requests": 0, 00:21:52.401 "delay_cmd_submit": true, 00:21:52.401 "transport_retry_count": 4, 00:21:52.401 "bdev_retry_count": 3, 00:21:52.401 "transport_ack_timeout": 0, 00:21:52.401 "ctrlr_loss_timeout_sec": 0, 00:21:52.401 "reconnect_delay_sec": 0, 00:21:52.401 "fast_io_fail_timeout_sec": 0, 00:21:52.401 "disable_auto_failback": false, 00:21:52.401 "generate_uuids": false, 00:21:52.401 "transport_tos": 0, 00:21:52.401 "nvme_error_stat": 
false, 00:21:52.401 "rdma_srq_size": 0, 00:21:52.401 "io_path_stat": false, 00:21:52.401 "allow_accel_sequence": false, 00:21:52.401 "rdma_max_cq_size": 0, 00:21:52.401 "rdma_cm_event_timeout_ms": 0, 00:21:52.401 "dhchap_digests": [ 00:21:52.401 "sha256", 00:21:52.401 "sha384", 00:21:52.401 "sha512" 00:21:52.401 ], 00:21:52.401 "dhchap_dhgroups": [ 00:21:52.401 "null", 00:21:52.401 "ffdhe2048", 00:21:52.401 "ffdhe3072", 00:21:52.401 "ffdhe4096", 00:21:52.401 "ffdhe6144", 00:21:52.401 "ffdhe8192" 00:21:52.401 ] 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "bdev_nvme_set_hotplug", 00:21:52.401 "params": { 00:21:52.401 "period_us": 100000, 00:21:52.401 "enable": false 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "bdev_malloc_create", 00:21:52.401 "params": { 00:21:52.401 "name": "malloc0", 00:21:52.401 "num_blocks": 8192, 00:21:52.401 "block_size": 4096, 00:21:52.401 "physical_block_size": 4096, 00:21:52.401 "uuid": "a92e4cda-5008-4bab-a637-aa32511b3f87", 00:21:52.401 "optimal_io_boundary": 0 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "bdev_wait_for_examine" 00:21:52.401 } 00:21:52.401 ] 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "subsystem": "nbd", 00:21:52.401 "config": [] 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "subsystem": "scheduler", 00:21:52.401 "config": [ 00:21:52.401 { 00:21:52.401 "method": "framework_set_scheduler", 00:21:52.401 "params": { 00:21:52.401 "name": "static" 00:21:52.401 } 00:21:52.401 } 00:21:52.401 ] 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "subsystem": "nvmf", 00:21:52.401 "config": [ 00:21:52.401 { 00:21:52.401 "method": "nvmf_set_config", 00:21:52.401 "params": { 00:21:52.401 "discovery_filter": "match_any", 00:21:52.401 "admin_cmd_passthru": { 00:21:52.401 "identify_ctrlr": false 00:21:52.401 } 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "nvmf_set_max_subsystems", 00:21:52.401 "params": { 00:21:52.401 "max_subsystems": 1024 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "nvmf_set_crdt", 00:21:52.401 "params": { 00:21:52.401 "crdt1": 0, 00:21:52.401 "crdt2": 0, 00:21:52.401 "crdt3": 0 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "nvmf_create_transport", 00:21:52.401 "params": { 00:21:52.401 "trtype": "TCP", 00:21:52.401 "max_queue_depth": 128, 00:21:52.401 "max_io_qpairs_per_ctrlr": 127, 00:21:52.401 "in_capsule_data_size": 4096, 00:21:52.401 "max_io_size": 131072, 00:21:52.401 "io_unit_size": 131072, 00:21:52.401 "max_aq_depth": 128, 00:21:52.401 "num_shared_buffers": 511, 00:21:52.401 "buf_cache_size": 4294967295, 00:21:52.401 "dif_insert_or_strip": false, 00:21:52.401 "zcopy": false, 00:21:52.401 "c2h_success": false, 00:21:52.401 "sock_priority": 0, 00:21:52.401 "abort_timeout_sec": 1, 00:21:52.401 "ack_timeout": 0, 00:21:52.401 "data_wr_pool_size": 0 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "nvmf_create_subsystem", 00:21:52.401 "params": { 00:21:52.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.401 "allow_any_host": false, 00:21:52.401 "serial_number": "SPDK00000000000001", 00:21:52.401 "model_number": "SPDK bdev Controller", 00:21:52.401 "max_namespaces": 10, 00:21:52.401 "min_cntlid": 1, 00:21:52.401 "max_cntlid": 65519, 00:21:52.401 "ana_reporting": false 00:21:52.401 } 00:21:52.401 }, 00:21:52.401 { 00:21:52.401 "method": "nvmf_subsystem_add_host", 00:21:52.401 "params": { 00:21:52.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.401 "host": "nqn.2016-06.io.spdk:host1", 
00:21:52.401 "psk": "/tmp/tmp.bmCTHB2doi" 00:21:52.401 } 00:21:52.401 }, 00:21:52.402 { 00:21:52.402 "method": "nvmf_subsystem_add_ns", 00:21:52.402 "params": { 00:21:52.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.402 "namespace": { 00:21:52.402 "nsid": 1, 00:21:52.402 "bdev_name": "malloc0", 00:21:52.402 "nguid": "A92E4CDA50084BABA637AA32511B3F87", 00:21:52.402 "uuid": "a92e4cda-5008-4bab-a637-aa32511b3f87", 00:21:52.402 "no_auto_visible": false 00:21:52.402 } 00:21:52.402 } 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "method": "nvmf_subsystem_add_listener", 00:21:52.402 "params": { 00:21:52.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.402 "listen_address": { 00:21:52.402 "trtype": "TCP", 00:21:52.402 "adrfam": "IPv4", 00:21:52.402 "traddr": "10.0.0.2", 00:21:52.402 "trsvcid": "4420" 00:21:52.402 }, 00:21:52.402 "secure_channel": true 00:21:52.402 } 00:21:52.402 } 00:21:52.402 ] 00:21:52.402 } 00:21:52.402 ] 00:21:52.402 }' 00:21:52.402 23:24:41 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:52.402 23:24:41 -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:52.402 "subsystems": [ 00:21:52.402 { 00:21:52.402 "subsystem": "keyring", 00:21:52.402 "config": [] 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "subsystem": "iobuf", 00:21:52.402 "config": [ 00:21:52.402 { 00:21:52.402 "method": "iobuf_set_options", 00:21:52.402 "params": { 00:21:52.402 "small_pool_count": 8192, 00:21:52.402 "large_pool_count": 1024, 00:21:52.402 "small_bufsize": 8192, 00:21:52.402 "large_bufsize": 135168 00:21:52.402 } 00:21:52.402 } 00:21:52.402 ] 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "subsystem": "sock", 00:21:52.402 "config": [ 00:21:52.402 { 00:21:52.402 "method": "sock_impl_set_options", 00:21:52.402 "params": { 00:21:52.402 "impl_name": "posix", 00:21:52.402 "recv_buf_size": 2097152, 00:21:52.402 "send_buf_size": 2097152, 00:21:52.402 "enable_recv_pipe": true, 00:21:52.402 "enable_quickack": false, 00:21:52.402 "enable_placement_id": 0, 00:21:52.402 "enable_zerocopy_send_server": true, 00:21:52.402 "enable_zerocopy_send_client": false, 00:21:52.402 "zerocopy_threshold": 0, 00:21:52.402 "tls_version": 0, 00:21:52.402 "enable_ktls": false 00:21:52.402 } 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "method": "sock_impl_set_options", 00:21:52.402 "params": { 00:21:52.402 "impl_name": "ssl", 00:21:52.402 "recv_buf_size": 4096, 00:21:52.402 "send_buf_size": 4096, 00:21:52.402 "enable_recv_pipe": true, 00:21:52.402 "enable_quickack": false, 00:21:52.402 "enable_placement_id": 0, 00:21:52.402 "enable_zerocopy_send_server": true, 00:21:52.402 "enable_zerocopy_send_client": false, 00:21:52.402 "zerocopy_threshold": 0, 00:21:52.402 "tls_version": 0, 00:21:52.402 "enable_ktls": false 00:21:52.402 } 00:21:52.402 } 00:21:52.402 ] 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "subsystem": "vmd", 00:21:52.402 "config": [] 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "subsystem": "accel", 00:21:52.402 "config": [ 00:21:52.402 { 00:21:52.402 "method": "accel_set_options", 00:21:52.402 "params": { 00:21:52.402 "small_cache_size": 128, 00:21:52.402 "large_cache_size": 16, 00:21:52.402 "task_count": 2048, 00:21:52.402 "sequence_count": 2048, 00:21:52.402 "buf_count": 2048 00:21:52.402 } 00:21:52.402 } 00:21:52.402 ] 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "subsystem": "bdev", 00:21:52.402 "config": [ 00:21:52.402 { 00:21:52.402 "method": "bdev_set_options", 00:21:52.402 "params": { 00:21:52.402 "bdev_io_pool_size": 65535, 
00:21:52.402 "bdev_io_cache_size": 256, 00:21:52.402 "bdev_auto_examine": true, 00:21:52.402 "iobuf_small_cache_size": 128, 00:21:52.402 "iobuf_large_cache_size": 16 00:21:52.402 } 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "method": "bdev_raid_set_options", 00:21:52.402 "params": { 00:21:52.402 "process_window_size_kb": 1024 00:21:52.402 } 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "method": "bdev_iscsi_set_options", 00:21:52.402 "params": { 00:21:52.402 "timeout_sec": 30 00:21:52.402 } 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "method": "bdev_nvme_set_options", 00:21:52.402 "params": { 00:21:52.402 "action_on_timeout": "none", 00:21:52.402 "timeout_us": 0, 00:21:52.402 "timeout_admin_us": 0, 00:21:52.402 "keep_alive_timeout_ms": 10000, 00:21:52.402 "arbitration_burst": 0, 00:21:52.402 "low_priority_weight": 0, 00:21:52.402 "medium_priority_weight": 0, 00:21:52.402 "high_priority_weight": 0, 00:21:52.402 "nvme_adminq_poll_period_us": 10000, 00:21:52.402 "nvme_ioq_poll_period_us": 0, 00:21:52.402 "io_queue_requests": 512, 00:21:52.402 "delay_cmd_submit": true, 00:21:52.402 "transport_retry_count": 4, 00:21:52.402 "bdev_retry_count": 3, 00:21:52.402 "transport_ack_timeout": 0, 00:21:52.402 "ctrlr_loss_timeout_sec": 0, 00:21:52.402 "reconnect_delay_sec": 0, 00:21:52.402 "fast_io_fail_timeout_sec": 0, 00:21:52.402 "disable_auto_failback": false, 00:21:52.402 "generate_uuids": false, 00:21:52.402 "transport_tos": 0, 00:21:52.402 "nvme_error_stat": false, 00:21:52.402 "rdma_srq_size": 0, 00:21:52.402 "io_path_stat": false, 00:21:52.402 "allow_accel_sequence": false, 00:21:52.402 "rdma_max_cq_size": 0, 00:21:52.402 "rdma_cm_event_timeout_ms": 0, 00:21:52.402 "dhchap_digests": [ 00:21:52.402 "sha256", 00:21:52.402 "sha384", 00:21:52.402 "sha512" 00:21:52.402 ], 00:21:52.402 "dhchap_dhgroups": [ 00:21:52.402 "null", 00:21:52.402 "ffdhe2048", 00:21:52.402 "ffdhe3072", 00:21:52.402 "ffdhe4096", 00:21:52.402 "ffdhe6144", 00:21:52.402 "ffdhe8192" 00:21:52.402 ] 00:21:52.402 } 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "method": "bdev_nvme_attach_controller", 00:21:52.402 "params": { 00:21:52.402 "name": "TLSTEST", 00:21:52.402 "trtype": "TCP", 00:21:52.402 "adrfam": "IPv4", 00:21:52.402 "traddr": "10.0.0.2", 00:21:52.402 "trsvcid": "4420", 00:21:52.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.402 "prchk_reftag": false, 00:21:52.402 "prchk_guard": false, 00:21:52.402 "ctrlr_loss_timeout_sec": 0, 00:21:52.402 "reconnect_delay_sec": 0, 00:21:52.402 "fast_io_fail_timeout_sec": 0, 00:21:52.402 "psk": "/tmp/tmp.bmCTHB2doi", 00:21:52.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.402 "hdgst": false, 00:21:52.402 "ddgst": false 00:21:52.402 } 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "method": "bdev_nvme_set_hotplug", 00:21:52.402 "params": { 00:21:52.402 "period_us": 100000, 00:21:52.402 "enable": false 00:21:52.402 } 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "method": "bdev_wait_for_examine" 00:21:52.402 } 00:21:52.402 ] 00:21:52.402 }, 00:21:52.402 { 00:21:52.402 "subsystem": "nbd", 00:21:52.402 "config": [] 00:21:52.402 } 00:21:52.402 ] 00:21:52.402 }' 00:21:52.402 23:24:41 -- target/tls.sh@199 -- # killprocess 3988953 00:21:52.402 23:24:41 -- common/autotest_common.sh@936 -- # '[' -z 3988953 ']' 00:21:52.402 23:24:41 -- common/autotest_common.sh@940 -- # kill -0 3988953 00:21:52.663 23:24:41 -- common/autotest_common.sh@941 -- # uname 00:21:52.663 23:24:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:52.663 23:24:41 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 3988953 00:21:52.663 23:24:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:52.663 23:24:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:52.663 23:24:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3988953' 00:21:52.663 killing process with pid 3988953 00:21:52.663 23:24:41 -- common/autotest_common.sh@955 -- # kill 3988953 00:21:52.663 Received shutdown signal, test time was about 10.000000 seconds 00:21:52.663 00:21:52.663 Latency(us) 00:21:52.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.663 =================================================================================================================== 00:21:52.663 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:52.663 [2024-04-26 23:24:41.709046] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:52.663 23:24:41 -- common/autotest_common.sh@960 -- # wait 3988953 00:21:52.663 23:24:41 -- target/tls.sh@200 -- # killprocess 3988665 00:21:52.663 23:24:41 -- common/autotest_common.sh@936 -- # '[' -z 3988665 ']' 00:21:52.663 23:24:41 -- common/autotest_common.sh@940 -- # kill -0 3988665 00:21:52.663 23:24:41 -- common/autotest_common.sh@941 -- # uname 00:21:52.663 23:24:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:52.663 23:24:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3988665 00:21:52.663 23:24:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:52.663 23:24:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:52.663 23:24:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3988665' 00:21:52.663 killing process with pid 3988665 00:21:52.663 23:24:41 -- common/autotest_common.sh@955 -- # kill 3988665 00:21:52.663 [2024-04-26 23:24:41.873207] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:52.663 23:24:41 -- common/autotest_common.sh@960 -- # wait 3988665 00:21:52.924 23:24:41 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:52.924 23:24:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:52.924 23:24:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:52.924 23:24:41 -- common/autotest_common.sh@10 -- # set +x 00:21:52.924 23:24:41 -- target/tls.sh@203 -- # echo '{ 00:21:52.924 "subsystems": [ 00:21:52.924 { 00:21:52.924 "subsystem": "keyring", 00:21:52.924 "config": [] 00:21:52.924 }, 00:21:52.924 { 00:21:52.924 "subsystem": "iobuf", 00:21:52.924 "config": [ 00:21:52.924 { 00:21:52.924 "method": "iobuf_set_options", 00:21:52.924 "params": { 00:21:52.924 "small_pool_count": 8192, 00:21:52.924 "large_pool_count": 1024, 00:21:52.924 "small_bufsize": 8192, 00:21:52.924 "large_bufsize": 135168 00:21:52.924 } 00:21:52.924 } 00:21:52.924 ] 00:21:52.924 }, 00:21:52.924 { 00:21:52.924 "subsystem": "sock", 00:21:52.924 "config": [ 00:21:52.924 { 00:21:52.924 "method": "sock_impl_set_options", 00:21:52.924 "params": { 00:21:52.924 "impl_name": "posix", 00:21:52.924 "recv_buf_size": 2097152, 00:21:52.924 "send_buf_size": 2097152, 00:21:52.924 "enable_recv_pipe": true, 00:21:52.924 "enable_quickack": false, 00:21:52.924 "enable_placement_id": 0, 00:21:52.924 "enable_zerocopy_send_server": true, 00:21:52.924 "enable_zerocopy_send_client": false, 00:21:52.924 "zerocopy_threshold": 0, 
00:21:52.924 "tls_version": 0, 00:21:52.924 "enable_ktls": false 00:21:52.924 } 00:21:52.924 }, 00:21:52.924 { 00:21:52.924 "method": "sock_impl_set_options", 00:21:52.924 "params": { 00:21:52.924 "impl_name": "ssl", 00:21:52.924 "recv_buf_size": 4096, 00:21:52.924 "send_buf_size": 4096, 00:21:52.924 "enable_recv_pipe": true, 00:21:52.924 "enable_quickack": false, 00:21:52.924 "enable_placement_id": 0, 00:21:52.924 "enable_zerocopy_send_server": true, 00:21:52.924 "enable_zerocopy_send_client": false, 00:21:52.924 "zerocopy_threshold": 0, 00:21:52.924 "tls_version": 0, 00:21:52.924 "enable_ktls": false 00:21:52.924 } 00:21:52.924 } 00:21:52.924 ] 00:21:52.924 }, 00:21:52.924 { 00:21:52.924 "subsystem": "vmd", 00:21:52.924 "config": [] 00:21:52.924 }, 00:21:52.924 { 00:21:52.924 "subsystem": "accel", 00:21:52.924 "config": [ 00:21:52.924 { 00:21:52.924 "method": "accel_set_options", 00:21:52.924 "params": { 00:21:52.924 "small_cache_size": 128, 00:21:52.924 "large_cache_size": 16, 00:21:52.924 "task_count": 2048, 00:21:52.924 "sequence_count": 2048, 00:21:52.924 "buf_count": 2048 00:21:52.924 } 00:21:52.924 } 00:21:52.924 ] 00:21:52.924 }, 00:21:52.924 { 00:21:52.924 "subsystem": "bdev", 00:21:52.924 "config": [ 00:21:52.924 { 00:21:52.924 "method": "bdev_set_options", 00:21:52.924 "params": { 00:21:52.924 "bdev_io_pool_size": 65535, 00:21:52.924 "bdev_io_cache_size": 256, 00:21:52.924 "bdev_auto_examine": true, 00:21:52.924 "iobuf_small_cache_size": 128, 00:21:52.924 "iobuf_large_cache_size": 16 00:21:52.924 } 00:21:52.924 }, 00:21:52.924 { 00:21:52.924 "method": "bdev_raid_set_options", 00:21:52.924 "params": { 00:21:52.924 "process_window_size_kb": 1024 00:21:52.924 } 00:21:52.924 }, 00:21:52.924 { 00:21:52.924 "method": "bdev_iscsi_set_options", 00:21:52.924 "params": { 00:21:52.924 "timeout_sec": 30 00:21:52.924 } 00:21:52.924 }, 00:21:52.924 { 00:21:52.924 "method": "bdev_nvme_set_options", 00:21:52.924 "params": { 00:21:52.924 "action_on_timeout": "none", 00:21:52.924 "timeout_us": 0, 00:21:52.924 "timeout_admin_us": 0, 00:21:52.924 "keep_alive_timeout_ms": 10000, 00:21:52.924 "arbitration_burst": 0, 00:21:52.924 "low_priority_weight": 0, 00:21:52.924 "medium_priority_weight": 0, 00:21:52.924 "high_priority_weight": 0, 00:21:52.924 "nvme_adminq_poll_period_us": 10000, 00:21:52.924 "nvme_ioq_poll_period_us": 0, 00:21:52.924 "io_queue_requests": 0, 00:21:52.924 "delay_cmd_submit": true, 00:21:52.924 "transport_retry_count": 4, 00:21:52.924 "bdev_retry_count": 3, 00:21:52.924 "transport_ack_timeout": 0, 00:21:52.924 "ctrlr_loss_timeout_sec": 0, 00:21:52.924 "reconnect_delay_sec": 0, 00:21:52.924 "fast_io_fail_timeout_sec": 0, 00:21:52.924 "disable_auto_failback": false, 00:21:52.924 "generate_uuids": false, 00:21:52.924 "transport_tos": 0, 00:21:52.924 "nvme_error_stat": false, 00:21:52.924 "rdma_srq_size": 0, 00:21:52.924 "io_path_stat": false, 00:21:52.924 "allow_accel_sequence": false, 00:21:52.924 "rdma_max_cq_size": 0, 00:21:52.924 "rdma_cm_event_timeout_ms": 0, 00:21:52.924 "dhchap_digests": [ 00:21:52.924 "sha256", 00:21:52.924 "sha384", 00:21:52.924 "sha512" 00:21:52.924 ], 00:21:52.924 "dhchap_dhgroups": [ 00:21:52.924 "null", 00:21:52.924 "ffdhe2048", 00:21:52.924 "ffdhe3072", 00:21:52.924 "ffdhe4096", 00:21:52.924 "ffdhe6144", 00:21:52.924 "ffdhe8192" 00:21:52.924 ] 00:21:52.924 } 00:21:52.924 }, 00:21:52.924 { 00:21:52.924 "method": "bdev_nvme_set_hotplug", 00:21:52.924 "params": { 00:21:52.924 "period_us": 100000, 00:21:52.924 "enable": false 00:21:52.924 } 00:21:52.924 }, 
00:21:52.924 { 00:21:52.924 "method": "bdev_malloc_create", 00:21:52.924 "params": { 00:21:52.924 "name": "malloc0", 00:21:52.924 "num_blocks": 8192, 00:21:52.924 "block_size": 4096, 00:21:52.924 "physical_block_size": 4096, 00:21:52.924 "uuid": "a92e4cda-5008-4bab-a637-aa32511b3f87", 00:21:52.925 "optimal_io_boundary": 0 00:21:52.925 } 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "method": "bdev_wait_for_examine" 00:21:52.925 } 00:21:52.925 ] 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "subsystem": "nbd", 00:21:52.925 "config": [] 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "subsystem": "scheduler", 00:21:52.925 "config": [ 00:21:52.925 { 00:21:52.925 "method": "framework_set_scheduler", 00:21:52.925 "params": { 00:21:52.925 "name": "static" 00:21:52.925 } 00:21:52.925 } 00:21:52.925 ] 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "subsystem": "nvmf", 00:21:52.925 "config": [ 00:21:52.925 { 00:21:52.925 "method": "nvmf_set_config", 00:21:52.925 "params": { 00:21:52.925 "discovery_filter": "match_any", 00:21:52.925 "admin_cmd_passthru": { 00:21:52.925 "identify_ctrlr": false 00:21:52.925 } 00:21:52.925 } 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "method": "nvmf_set_max_subsystems", 00:21:52.925 "params": { 00:21:52.925 "max_subsystems": 1024 00:21:52.925 } 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "method": "nvmf_set_crdt", 00:21:52.925 "params": { 00:21:52.925 "crdt1": 0, 00:21:52.925 "crdt2": 0, 00:21:52.925 "crdt3": 0 00:21:52.925 } 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "method": "nvmf_create_transport", 00:21:52.925 "params": { 00:21:52.925 "trtype": "TCP", 00:21:52.925 "max_queue_depth": 128, 00:21:52.925 "max_io_qpairs_per_ctrlr": 127, 00:21:52.925 "in_capsule_data_size": 4096, 00:21:52.925 "max_io_size": 131072, 00:21:52.925 "io_unit_size": 131072, 00:21:52.925 "max_aq_depth": 128, 00:21:52.925 "num_shared_buffers": 511, 00:21:52.925 "buf_cache_size": 4294967295, 00:21:52.925 "dif_insert_or_strip": false, 00:21:52.925 "zcopy": false, 00:21:52.925 "c2h_success": false, 00:21:52.925 "sock_priority": 0, 00:21:52.925 "abort_timeout_sec": 1, 00:21:52.925 "ack_timeout": 0, 00:21:52.925 "data_wr_pool_size": 0 00:21:52.925 } 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "method": "nvmf_create_subsystem", 00:21:52.925 "params": { 00:21:52.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.925 "allow_any_host": false, 00:21:52.925 "serial_number": "SPDK00000000000001", 00:21:52.925 "model_number": "SPDK bdev Controller", 00:21:52.925 "max_namespaces": 10, 00:21:52.925 "min_cntlid": 1, 00:21:52.925 "max_cntlid": 65519, 00:21:52.925 "ana_reporting": false 00:21:52.925 } 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "method": "nvmf_subsystem_add_host", 00:21:52.925 "params": { 00:21:52.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.925 "host": "nqn.2016-06.io.spdk:host1", 00:21:52.925 "psk": "/tmp/tmp.bmCTHB2doi" 00:21:52.925 } 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "method": "nvmf_subsystem_add_ns", 00:21:52.925 "params": { 00:21:52.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.925 "namespace": { 00:21:52.925 "nsid": 1, 00:21:52.925 "bdev_name": "malloc0", 00:21:52.925 "nguid": "A92E4CDA50084BABA637AA32511B3F87", 00:21:52.925 "uuid": "a92e4cda-5008-4bab-a637-aa32511b3f87", 00:21:52.925 "no_auto_visible": false 00:21:52.925 } 00:21:52.925 } 00:21:52.925 }, 00:21:52.925 { 00:21:52.925 "method": "nvmf_subsystem_add_listener", 00:21:52.925 "params": { 00:21:52.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.925 "listen_address": { 00:21:52.925 "trtype": "TCP", 00:21:52.925 "adrfam": 
"IPv4", 00:21:52.925 "traddr": "10.0.0.2", 00:21:52.925 "trsvcid": "4420" 00:21:52.925 }, 00:21:52.925 "secure_channel": true 00:21:52.925 } 00:21:52.925 } 00:21:52.925 ] 00:21:52.925 } 00:21:52.925 ] 00:21:52.925 }' 00:21:52.925 23:24:42 -- nvmf/common.sh@470 -- # nvmfpid=3989298 00:21:52.925 23:24:42 -- nvmf/common.sh@471 -- # waitforlisten 3989298 00:21:52.925 23:24:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:52.925 23:24:42 -- common/autotest_common.sh@817 -- # '[' -z 3989298 ']' 00:21:52.925 23:24:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.925 23:24:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:52.925 23:24:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.925 23:24:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:52.925 23:24:42 -- common/autotest_common.sh@10 -- # set +x 00:21:52.925 [2024-04-26 23:24:42.054582] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:52.925 [2024-04-26 23:24:42.054637] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.925 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.925 [2024-04-26 23:24:42.118499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.925 [2024-04-26 23:24:42.146974] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.925 [2024-04-26 23:24:42.147009] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.925 [2024-04-26 23:24:42.147016] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.925 [2024-04-26 23:24:42.147022] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.925 [2024-04-26 23:24:42.147028] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:52.925 [2024-04-26 23:24:42.147084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.186 [2024-04-26 23:24:42.321724] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.186 [2024-04-26 23:24:42.337668] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:53.186 [2024-04-26 23:24:42.353726] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:53.186 [2024-04-26 23:24:42.364140] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.756 23:24:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:53.756 23:24:42 -- common/autotest_common.sh@850 -- # return 0 00:21:53.756 23:24:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:53.756 23:24:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:53.756 23:24:42 -- common/autotest_common.sh@10 -- # set +x 00:21:53.756 23:24:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.756 23:24:42 -- target/tls.sh@207 -- # bdevperf_pid=3989330 00:21:53.756 23:24:42 -- target/tls.sh@208 -- # waitforlisten 3989330 /var/tmp/bdevperf.sock 00:21:53.756 23:24:42 -- common/autotest_common.sh@817 -- # '[' -z 3989330 ']' 00:21:53.756 23:24:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.756 23:24:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:53.756 23:24:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
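The bdevperf instance being waited on here was started with -z, which keeps it idle after init: it parses its config (fed through /dev/fd/63), opens its own RPC socket at -r /var/tmp/bdevperf.sock, and only starts I/O once perform_tests arrives on that socket, as the bdevperf.py call further down shows. A condensed sketch, with bdevperf.json standing in for the /dev/fd/63 config feed:

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests  # release the queued job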
00:21:53.756 23:24:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:53.756 23:24:42 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:53.756 23:24:42 -- common/autotest_common.sh@10 -- # set +x 00:21:53.756 23:24:42 -- target/tls.sh@204 -- # echo '{ 00:21:53.756 "subsystems": [ 00:21:53.756 { 00:21:53.756 "subsystem": "keyring", 00:21:53.756 "config": [] 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "subsystem": "iobuf", 00:21:53.756 "config": [ 00:21:53.756 { 00:21:53.756 "method": "iobuf_set_options", 00:21:53.756 "params": { 00:21:53.756 "small_pool_count": 8192, 00:21:53.756 "large_pool_count": 1024, 00:21:53.756 "small_bufsize": 8192, 00:21:53.756 "large_bufsize": 135168 00:21:53.756 } 00:21:53.756 } 00:21:53.756 ] 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "subsystem": "sock", 00:21:53.756 "config": [ 00:21:53.756 { 00:21:53.756 "method": "sock_impl_set_options", 00:21:53.756 "params": { 00:21:53.756 "impl_name": "posix", 00:21:53.756 "recv_buf_size": 2097152, 00:21:53.756 "send_buf_size": 2097152, 00:21:53.756 "enable_recv_pipe": true, 00:21:53.756 "enable_quickack": false, 00:21:53.756 "enable_placement_id": 0, 00:21:53.756 "enable_zerocopy_send_server": true, 00:21:53.756 "enable_zerocopy_send_client": false, 00:21:53.756 "zerocopy_threshold": 0, 00:21:53.756 "tls_version": 0, 00:21:53.756 "enable_ktls": false 00:21:53.756 } 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "method": "sock_impl_set_options", 00:21:53.756 "params": { 00:21:53.756 "impl_name": "ssl", 00:21:53.756 "recv_buf_size": 4096, 00:21:53.756 "send_buf_size": 4096, 00:21:53.756 "enable_recv_pipe": true, 00:21:53.756 "enable_quickack": false, 00:21:53.756 "enable_placement_id": 0, 00:21:53.756 "enable_zerocopy_send_server": true, 00:21:53.756 "enable_zerocopy_send_client": false, 00:21:53.756 "zerocopy_threshold": 0, 00:21:53.756 "tls_version": 0, 00:21:53.756 "enable_ktls": false 00:21:53.756 } 00:21:53.756 } 00:21:53.756 ] 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "subsystem": "vmd", 00:21:53.756 "config": [] 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "subsystem": "accel", 00:21:53.756 "config": [ 00:21:53.756 { 00:21:53.756 "method": "accel_set_options", 00:21:53.756 "params": { 00:21:53.756 "small_cache_size": 128, 00:21:53.756 "large_cache_size": 16, 00:21:53.756 "task_count": 2048, 00:21:53.756 "sequence_count": 2048, 00:21:53.756 "buf_count": 2048 00:21:53.756 } 00:21:53.756 } 00:21:53.756 ] 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "subsystem": "bdev", 00:21:53.756 "config": [ 00:21:53.756 { 00:21:53.756 "method": "bdev_set_options", 00:21:53.756 "params": { 00:21:53.756 "bdev_io_pool_size": 65535, 00:21:53.756 "bdev_io_cache_size": 256, 00:21:53.756 "bdev_auto_examine": true, 00:21:53.756 "iobuf_small_cache_size": 128, 00:21:53.756 "iobuf_large_cache_size": 16 00:21:53.756 } 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "method": "bdev_raid_set_options", 00:21:53.756 "params": { 00:21:53.756 "process_window_size_kb": 1024 00:21:53.756 } 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "method": "bdev_iscsi_set_options", 00:21:53.756 "params": { 00:21:53.756 "timeout_sec": 30 00:21:53.756 } 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "method": "bdev_nvme_set_options", 00:21:53.756 "params": { 00:21:53.756 "action_on_timeout": "none", 00:21:53.756 "timeout_us": 0, 00:21:53.756 "timeout_admin_us": 0, 00:21:53.756 "keep_alive_timeout_ms": 10000, 00:21:53.756 
"arbitration_burst": 0, 00:21:53.756 "low_priority_weight": 0, 00:21:53.756 "medium_priority_weight": 0, 00:21:53.756 "high_priority_weight": 0, 00:21:53.756 "nvme_adminq_poll_period_us": 10000, 00:21:53.756 "nvme_ioq_poll_period_us": 0, 00:21:53.756 "io_queue_requests": 512, 00:21:53.756 "delay_cmd_submit": true, 00:21:53.756 "transport_retry_count": 4, 00:21:53.756 "bdev_retry_count": 3, 00:21:53.756 "transport_ack_timeout": 0, 00:21:53.756 "ctrlr_loss_timeout_sec": 0, 00:21:53.756 "reconnect_delay_sec": 0, 00:21:53.756 "fast_io_fail_timeout_sec": 0, 00:21:53.756 "disable_auto_failback": false, 00:21:53.756 "generate_uuids": false, 00:21:53.756 "transport_tos": 0, 00:21:53.756 "nvme_error_stat": false, 00:21:53.756 "rdma_srq_size": 0, 00:21:53.756 "io_path_stat": false, 00:21:53.756 "allow_accel_sequence": false, 00:21:53.756 "rdma_max_cq_size": 0, 00:21:53.756 "rdma_cm_event_timeout_ms": 0, 00:21:53.756 "dhchap_digests": [ 00:21:53.756 "sha256", 00:21:53.756 "sha384", 00:21:53.756 "sha512" 00:21:53.756 ], 00:21:53.756 "dhchap_dhgroups": [ 00:21:53.756 "null", 00:21:53.756 "ffdhe2048", 00:21:53.756 "ffdhe3072", 00:21:53.756 "ffdhe4096", 00:21:53.756 "ffdhe6144", 00:21:53.756 "ffdhe8192" 00:21:53.756 ] 00:21:53.756 } 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "method": "bdev_nvme_attach_controller", 00:21:53.756 "params": { 00:21:53.756 "name": "TLSTEST", 00:21:53.756 "trtype": "TCP", 00:21:53.756 "adrfam": "IPv4", 00:21:53.756 "traddr": "10.0.0.2", 00:21:53.756 "trsvcid": "4420", 00:21:53.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.756 "prchk_reftag": false, 00:21:53.756 "prchk_guard": false, 00:21:53.756 "ctrlr_loss_timeout_sec": 0, 00:21:53.756 "reconnect_delay_sec": 0, 00:21:53.756 "fast_io_fail_timeout_sec": 0, 00:21:53.756 "psk": "/tmp/tmp.bmCTHB2doi", 00:21:53.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.756 "hdgst": false, 00:21:53.756 "ddgst": false 00:21:53.756 } 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "method": "bdev_nvme_set_hotplug", 00:21:53.756 "params": { 00:21:53.756 "period_us": 100000, 00:21:53.756 "enable": false 00:21:53.756 } 00:21:53.756 }, 00:21:53.756 { 00:21:53.756 "method": "bdev_wait_for_examine" 00:21:53.756 } 00:21:53.757 ] 00:21:53.757 }, 00:21:53.757 { 00:21:53.757 "subsystem": "nbd", 00:21:53.757 "config": [] 00:21:53.757 } 00:21:53.757 ] 00:21:53.757 }' 00:21:53.757 [2024-04-26 23:24:42.898414] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:53.757 [2024-04-26 23:24:42.898462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3989330 ] 00:21:53.757 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.757 [2024-04-26 23:24:42.948902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.757 [2024-04-26 23:24:42.975420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.016 [2024-04-26 23:24:43.086672] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:54.016 [2024-04-26 23:24:43.086736] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:54.585 23:24:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:54.585 23:24:43 -- common/autotest_common.sh@850 -- # return 0 00:21:54.585 23:24:43 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:54.585 Running I/O for 10 seconds... 00:22:04.577 00:22:04.577 Latency(us) 00:22:04.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.577 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:04.577 Verification LBA range: start 0x0 length 0x2000 00:22:04.577 TLSTESTn1 : 10.02 4810.03 18.79 0.00 0.00 26573.09 5980.16 48059.73 00:22:04.577 =================================================================================================================== 00:22:04.577 Total : 4810.03 18.79 0.00 0.00 26573.09 5980.16 48059.73 00:22:04.577 0 00:22:04.577 23:24:53 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:04.577 23:24:53 -- target/tls.sh@214 -- # killprocess 3989330 00:22:04.577 23:24:53 -- common/autotest_common.sh@936 -- # '[' -z 3989330 ']' 00:22:04.577 23:24:53 -- common/autotest_common.sh@940 -- # kill -0 3989330 00:22:04.577 23:24:53 -- common/autotest_common.sh@941 -- # uname 00:22:04.577 23:24:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:04.577 23:24:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3989330 00:22:04.837 23:24:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:04.837 23:24:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:04.837 23:24:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3989330' 00:22:04.837 killing process with pid 3989330 00:22:04.837 23:24:53 -- common/autotest_common.sh@955 -- # kill 3989330 00:22:04.837 Received shutdown signal, test time was about 10.000000 seconds 00:22:04.837 00:22:04.837 Latency(us) 00:22:04.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.837 =================================================================================================================== 00:22:04.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:04.837 [2024-04-26 23:24:53.857860] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:04.837 23:24:53 -- common/autotest_common.sh@960 -- # wait 3989330 00:22:04.837 23:24:53 -- target/tls.sh@215 -- # killprocess 3989298 00:22:04.837 23:24:53 -- common/autotest_common.sh@936 -- # '[' -z 3989298 ']' 
00:22:04.837 23:24:53 -- common/autotest_common.sh@940 -- # kill -0 3989298 00:22:04.837 23:24:53 -- common/autotest_common.sh@941 -- # uname 00:22:04.837 23:24:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:04.837 23:24:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3989298 00:22:04.837 23:24:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:04.837 23:24:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:04.837 23:24:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3989298' 00:22:04.837 killing process with pid 3989298 00:22:04.837 23:24:54 -- common/autotest_common.sh@955 -- # kill 3989298 00:22:04.837 [2024-04-26 23:24:54.018737] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:04.837 23:24:54 -- common/autotest_common.sh@960 -- # wait 3989298 00:22:05.098 23:24:54 -- target/tls.sh@218 -- # nvmfappstart 00:22:05.098 23:24:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:05.098 23:24:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:05.098 23:24:54 -- common/autotest_common.sh@10 -- # set +x 00:22:05.098 23:24:54 -- nvmf/common.sh@470 -- # nvmfpid=3991666 00:22:05.098 23:24:54 -- nvmf/common.sh@471 -- # waitforlisten 3991666 00:22:05.098 23:24:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:05.098 23:24:54 -- common/autotest_common.sh@817 -- # '[' -z 3991666 ']' 00:22:05.098 23:24:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.098 23:24:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:05.098 23:24:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.098 23:24:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:05.098 23:24:54 -- common/autotest_common.sh@10 -- # set +x 00:22:05.098 [2024-04-26 23:24:54.204756] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:05.098 [2024-04-26 23:24:54.204831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.098 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.098 [2024-04-26 23:24:54.270569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.098 [2024-04-26 23:24:54.300571] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.098 [2024-04-26 23:24:54.300609] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.098 [2024-04-26 23:24:54.300617] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.098 [2024-04-26 23:24:54.300624] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.098 [2024-04-26 23:24:54.300630] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
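Note the ip netns exec wrapper on the nvmfappstart line above: the target runs inside the cvl_0_0_ns_spdk network namespace, keeping its 10.0.0.2 interface separate from the host-side initiator stack. Stripped of the Jenkins workspace paths, the launch is:

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF
# address/interface setup for the namespace happens earlier in the job and is not shown in this excerpt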
00:22:05.098 [2024-04-26 23:24:54.300647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.037 23:24:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:06.037 23:24:54 -- common/autotest_common.sh@850 -- # return 0 00:22:06.037 23:24:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:06.037 23:24:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:06.037 23:24:54 -- common/autotest_common.sh@10 -- # set +x 00:22:06.037 23:24:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.037 23:24:54 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.bmCTHB2doi 00:22:06.037 23:24:54 -- target/tls.sh@49 -- # local key=/tmp/tmp.bmCTHB2doi 00:22:06.037 23:24:54 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:06.037 [2024-04-26 23:24:55.137162] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.037 23:24:55 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:06.297 23:24:55 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:06.297 [2024-04-26 23:24:55.437907] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:06.297 [2024-04-26 23:24:55.438125] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.297 23:24:55 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:06.557 malloc0 00:22:06.557 23:24:55 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:06.557 23:24:55 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bmCTHB2doi 00:22:06.818 [2024-04-26 23:24:55.873796] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:06.818 23:24:55 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:06.818 23:24:55 -- target/tls.sh@222 -- # bdevperf_pid=3992029 00:22:06.818 23:24:55 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:06.818 23:24:55 -- target/tls.sh@225 -- # waitforlisten 3992029 /var/tmp/bdevperf.sock 00:22:06.818 23:24:55 -- common/autotest_common.sh@817 -- # '[' -z 3992029 ']' 00:22:06.818 23:24:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.818 23:24:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:06.818 23:24:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
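Condensing the setup_nvmf_tgt trace above, the whole TLS-enabled target comes down to six RPCs; -k on the listener is what turns on the (still experimental) TLS mode of the TCP transport, and --psk points at the temporary key file the test generated (rpc.py again abbreviates the full scripts/rpc.py path):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bmCTHB2doi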
00:22:06.818 23:24:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:06.818 23:24:55 -- common/autotest_common.sh@10 -- # set +x 00:22:06.818 [2024-04-26 23:24:55.920574] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:06.818 [2024-04-26 23:24:55.920619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992029 ] 00:22:06.818 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.818 [2024-04-26 23:24:55.980626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.818 [2024-04-26 23:24:56.009865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.818 23:24:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:06.818 23:24:56 -- common/autotest_common.sh@850 -- # return 0 00:22:07.078 23:24:56 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bmCTHB2doi 00:22:07.078 23:24:56 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:07.338 [2024-04-26 23:24:56.360809] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.338 nvme0n1 00:22:07.338 23:24:56 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:07.338 Running I/O for 1 seconds... 
00:22:08.720 00:22:08.720 Latency(us) 00:22:08.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.720 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:08.720 Verification LBA range: start 0x0 length 0x2000 00:22:08.720 nvme0n1 : 1.06 3257.40 12.72 0.00 0.00 38348.59 6089.39 52647.25 00:22:08.720 =================================================================================================================== 00:22:08.720 Total : 3257.40 12.72 0.00 0.00 38348.59 6089.39 52647.25 00:22:08.720 0 00:22:08.720 23:24:57 -- target/tls.sh@234 -- # killprocess 3992029 00:22:08.720 23:24:57 -- common/autotest_common.sh@936 -- # '[' -z 3992029 ']' 00:22:08.720 23:24:57 -- common/autotest_common.sh@940 -- # kill -0 3992029 00:22:08.720 23:24:57 -- common/autotest_common.sh@941 -- # uname 00:22:08.720 23:24:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:08.720 23:24:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3992029 00:22:08.720 23:24:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:08.720 23:24:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:08.720 23:24:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3992029' 00:22:08.720 killing process with pid 3992029 00:22:08.720 23:24:57 -- common/autotest_common.sh@955 -- # kill 3992029 00:22:08.720 Received shutdown signal, test time was about 1.000000 seconds 00:22:08.720 00:22:08.720 Latency(us) 00:22:08.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.720 =================================================================================================================== 00:22:08.720 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.720 23:24:57 -- common/autotest_common.sh@960 -- # wait 3992029 00:22:08.720 23:24:57 -- target/tls.sh@235 -- # killprocess 3991666 00:22:08.720 23:24:57 -- common/autotest_common.sh@936 -- # '[' -z 3991666 ']' 00:22:08.720 23:24:57 -- common/autotest_common.sh@940 -- # kill -0 3991666 00:22:08.720 23:24:57 -- common/autotest_common.sh@941 -- # uname 00:22:08.720 23:24:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:08.720 23:24:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3991666 00:22:08.720 23:24:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:08.720 23:24:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:08.720 23:24:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3991666' 00:22:08.720 killing process with pid 3991666 00:22:08.720 23:24:57 -- common/autotest_common.sh@955 -- # kill 3991666 00:22:08.720 [2024-04-26 23:24:57.832559] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:08.720 23:24:57 -- common/autotest_common.sh@960 -- # wait 3991666 00:22:08.720 23:24:57 -- target/tls.sh@238 -- # nvmfappstart 00:22:08.720 23:24:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:08.720 23:24:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:08.720 23:24:57 -- common/autotest_common.sh@10 -- # set +x 00:22:08.720 23:24:57 -- nvmf/common.sh@470 -- # nvmfpid=3992383 00:22:08.720 23:24:57 -- nvmf/common.sh@471 -- # waitforlisten 3992383 00:22:08.720 23:24:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:22:08.720 23:24:57 -- common/autotest_common.sh@817 -- # '[' -z 3992383 ']' 00:22:08.720 23:24:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.720 23:24:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:08.720 23:24:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.720 23:24:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:08.720 23:24:57 -- common/autotest_common.sh@10 -- # set +x 00:22:08.979 [2024-04-26 23:24:57.998972] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:08.979 [2024-04-26 23:24:57.999032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.979 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.979 [2024-04-26 23:24:58.061056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.979 [2024-04-26 23:24:58.088881] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.979 [2024-04-26 23:24:58.088917] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.979 [2024-04-26 23:24:58.088925] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.979 [2024-04-26 23:24:58.088932] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.979 [2024-04-26 23:24:58.088938] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
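The app_setup_trace notices above spell out how to inspect the tracepoints enabled by -e 0xFFFF on a running target; following the log's own hints:

spdk_trace -s nvmf -i 0        # snapshot live events from target instance 0
cp /dev/shm/nvmf_trace.0 .     # or keep the shm trace file for offline analysis/debug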
00:22:08.979 [2024-04-26 23:24:58.088955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.980 23:24:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:08.980 23:24:58 -- common/autotest_common.sh@850 -- # return 0 00:22:08.980 23:24:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:08.980 23:24:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:08.980 23:24:58 -- common/autotest_common.sh@10 -- # set +x 00:22:08.980 23:24:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.980 23:24:58 -- target/tls.sh@239 -- # rpc_cmd 00:22:08.980 23:24:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.980 23:24:58 -- common/autotest_common.sh@10 -- # set +x 00:22:08.980 [2024-04-26 23:24:58.203815] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.980 malloc0 00:22:08.980 [2024-04-26 23:24:58.230585] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:08.980 [2024-04-26 23:24:58.230790] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.239 23:24:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.239 23:24:58 -- target/tls.sh@252 -- # bdevperf_pid=3992405 00:22:09.239 23:24:58 -- target/tls.sh@254 -- # waitforlisten 3992405 /var/tmp/bdevperf.sock 00:22:09.239 23:24:58 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:09.239 23:24:58 -- common/autotest_common.sh@817 -- # '[' -z 3992405 ']' 00:22:09.239 23:24:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.239 23:24:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:09.239 23:24:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.239 23:24:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:09.239 23:24:58 -- common/autotest_common.sh@10 -- # set +x 00:22:09.239 [2024-04-26 23:24:58.315747] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:22:09.239 [2024-04-26 23:24:58.315798] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992405 ] 00:22:09.239 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.239 [2024-04-26 23:24:58.374277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.239 [2024-04-26 23:24:58.403002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.832 23:24:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:09.832 23:24:59 -- common/autotest_common.sh@850 -- # return 0 00:22:09.832 23:24:59 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bmCTHB2doi 00:22:10.091 23:24:59 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:10.091 [2024-04-26 23:24:59.339293] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.351 nvme0n1 00:22:10.351 23:24:59 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:10.351 Running I/O for 1 seconds... 00:22:11.302 00:22:11.302 Latency(us) 00:22:11.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.302 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:11.302 Verification LBA range: start 0x0 length 0x2000 00:22:11.302 nvme0n1 : 1.02 3433.82 13.41 0.00 0.00 36944.26 7864.32 39976.96 00:22:11.302 =================================================================================================================== 00:22:11.302 Total : 3433.82 13.41 0.00 0.00 36944.26 7864.32 39976.96 00:22:11.302 0 00:22:11.562 23:25:00 -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:11.562 23:25:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.562 23:25:00 -- common/autotest_common.sh@10 -- # set +x 00:22:11.562 23:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.562 23:25:00 -- target/tls.sh@263 -- # tgtcfg='{ 00:22:11.562 "subsystems": [ 00:22:11.562 { 00:22:11.562 "subsystem": "keyring", 00:22:11.562 "config": [ 00:22:11.562 { 00:22:11.562 "method": "keyring_file_add_key", 00:22:11.562 "params": { 00:22:11.562 "name": "key0", 00:22:11.562 "path": "/tmp/tmp.bmCTHB2doi" 00:22:11.562 } 00:22:11.562 } 00:22:11.562 ] 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "subsystem": "iobuf", 00:22:11.562 "config": [ 00:22:11.562 { 00:22:11.562 "method": "iobuf_set_options", 00:22:11.562 "params": { 00:22:11.562 "small_pool_count": 8192, 00:22:11.562 "large_pool_count": 1024, 00:22:11.562 "small_bufsize": 8192, 00:22:11.562 "large_bufsize": 135168 00:22:11.562 } 00:22:11.562 } 00:22:11.562 ] 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "subsystem": "sock", 00:22:11.562 "config": [ 00:22:11.562 { 00:22:11.562 "method": "sock_impl_set_options", 00:22:11.562 "params": { 00:22:11.562 "impl_name": "posix", 00:22:11.562 "recv_buf_size": 2097152, 00:22:11.562 "send_buf_size": 2097152, 00:22:11.562 "enable_recv_pipe": true, 00:22:11.562 "enable_quickack": false, 00:22:11.562 "enable_placement_id": 0, 00:22:11.562 
"enable_zerocopy_send_server": true, 00:22:11.562 "enable_zerocopy_send_client": false, 00:22:11.562 "zerocopy_threshold": 0, 00:22:11.562 "tls_version": 0, 00:22:11.562 "enable_ktls": false 00:22:11.562 } 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "method": "sock_impl_set_options", 00:22:11.562 "params": { 00:22:11.562 "impl_name": "ssl", 00:22:11.562 "recv_buf_size": 4096, 00:22:11.562 "send_buf_size": 4096, 00:22:11.562 "enable_recv_pipe": true, 00:22:11.562 "enable_quickack": false, 00:22:11.562 "enable_placement_id": 0, 00:22:11.562 "enable_zerocopy_send_server": true, 00:22:11.562 "enable_zerocopy_send_client": false, 00:22:11.562 "zerocopy_threshold": 0, 00:22:11.562 "tls_version": 0, 00:22:11.562 "enable_ktls": false 00:22:11.562 } 00:22:11.562 } 00:22:11.562 ] 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "subsystem": "vmd", 00:22:11.562 "config": [] 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "subsystem": "accel", 00:22:11.562 "config": [ 00:22:11.562 { 00:22:11.562 "method": "accel_set_options", 00:22:11.562 "params": { 00:22:11.562 "small_cache_size": 128, 00:22:11.562 "large_cache_size": 16, 00:22:11.562 "task_count": 2048, 00:22:11.562 "sequence_count": 2048, 00:22:11.562 "buf_count": 2048 00:22:11.562 } 00:22:11.562 } 00:22:11.562 ] 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "subsystem": "bdev", 00:22:11.562 "config": [ 00:22:11.562 { 00:22:11.562 "method": "bdev_set_options", 00:22:11.562 "params": { 00:22:11.562 "bdev_io_pool_size": 65535, 00:22:11.562 "bdev_io_cache_size": 256, 00:22:11.562 "bdev_auto_examine": true, 00:22:11.562 "iobuf_small_cache_size": 128, 00:22:11.562 "iobuf_large_cache_size": 16 00:22:11.562 } 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "method": "bdev_raid_set_options", 00:22:11.562 "params": { 00:22:11.562 "process_window_size_kb": 1024 00:22:11.562 } 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "method": "bdev_iscsi_set_options", 00:22:11.562 "params": { 00:22:11.562 "timeout_sec": 30 00:22:11.562 } 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "method": "bdev_nvme_set_options", 00:22:11.562 "params": { 00:22:11.562 "action_on_timeout": "none", 00:22:11.562 "timeout_us": 0, 00:22:11.562 "timeout_admin_us": 0, 00:22:11.562 "keep_alive_timeout_ms": 10000, 00:22:11.562 "arbitration_burst": 0, 00:22:11.562 "low_priority_weight": 0, 00:22:11.562 "medium_priority_weight": 0, 00:22:11.562 "high_priority_weight": 0, 00:22:11.562 "nvme_adminq_poll_period_us": 10000, 00:22:11.562 "nvme_ioq_poll_period_us": 0, 00:22:11.562 "io_queue_requests": 0, 00:22:11.562 "delay_cmd_submit": true, 00:22:11.562 "transport_retry_count": 4, 00:22:11.562 "bdev_retry_count": 3, 00:22:11.562 "transport_ack_timeout": 0, 00:22:11.562 "ctrlr_loss_timeout_sec": 0, 00:22:11.562 "reconnect_delay_sec": 0, 00:22:11.562 "fast_io_fail_timeout_sec": 0, 00:22:11.562 "disable_auto_failback": false, 00:22:11.562 "generate_uuids": false, 00:22:11.562 "transport_tos": 0, 00:22:11.562 "nvme_error_stat": false, 00:22:11.562 "rdma_srq_size": 0, 00:22:11.562 "io_path_stat": false, 00:22:11.562 "allow_accel_sequence": false, 00:22:11.562 "rdma_max_cq_size": 0, 00:22:11.562 "rdma_cm_event_timeout_ms": 0, 00:22:11.562 "dhchap_digests": [ 00:22:11.562 "sha256", 00:22:11.562 "sha384", 00:22:11.562 "sha512" 00:22:11.562 ], 00:22:11.562 "dhchap_dhgroups": [ 00:22:11.562 "null", 00:22:11.562 "ffdhe2048", 00:22:11.562 "ffdhe3072", 00:22:11.562 "ffdhe4096", 00:22:11.562 "ffdhe6144", 00:22:11.562 "ffdhe8192" 00:22:11.562 ] 00:22:11.562 } 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "method": 
"bdev_nvme_set_hotplug", 00:22:11.562 "params": { 00:22:11.562 "period_us": 100000, 00:22:11.562 "enable": false 00:22:11.562 } 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "method": "bdev_malloc_create", 00:22:11.562 "params": { 00:22:11.562 "name": "malloc0", 00:22:11.562 "num_blocks": 8192, 00:22:11.562 "block_size": 4096, 00:22:11.562 "physical_block_size": 4096, 00:22:11.562 "uuid": "29a8e42e-9570-45ee-a210-f2373b1c6a3a", 00:22:11.562 "optimal_io_boundary": 0 00:22:11.562 } 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "method": "bdev_wait_for_examine" 00:22:11.562 } 00:22:11.562 ] 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "subsystem": "nbd", 00:22:11.562 "config": [] 00:22:11.562 }, 00:22:11.562 { 00:22:11.562 "subsystem": "scheduler", 00:22:11.562 "config": [ 00:22:11.562 { 00:22:11.562 "method": "framework_set_scheduler", 00:22:11.563 "params": { 00:22:11.563 "name": "static" 00:22:11.563 } 00:22:11.563 } 00:22:11.563 ] 00:22:11.563 }, 00:22:11.563 { 00:22:11.563 "subsystem": "nvmf", 00:22:11.563 "config": [ 00:22:11.563 { 00:22:11.563 "method": "nvmf_set_config", 00:22:11.563 "params": { 00:22:11.563 "discovery_filter": "match_any", 00:22:11.563 "admin_cmd_passthru": { 00:22:11.563 "identify_ctrlr": false 00:22:11.563 } 00:22:11.563 } 00:22:11.563 }, 00:22:11.563 { 00:22:11.563 "method": "nvmf_set_max_subsystems", 00:22:11.563 "params": { 00:22:11.563 "max_subsystems": 1024 00:22:11.563 } 00:22:11.563 }, 00:22:11.563 { 00:22:11.563 "method": "nvmf_set_crdt", 00:22:11.563 "params": { 00:22:11.563 "crdt1": 0, 00:22:11.563 "crdt2": 0, 00:22:11.563 "crdt3": 0 00:22:11.563 } 00:22:11.563 }, 00:22:11.563 { 00:22:11.563 "method": "nvmf_create_transport", 00:22:11.563 "params": { 00:22:11.563 "trtype": "TCP", 00:22:11.563 "max_queue_depth": 128, 00:22:11.563 "max_io_qpairs_per_ctrlr": 127, 00:22:11.563 "in_capsule_data_size": 4096, 00:22:11.563 "max_io_size": 131072, 00:22:11.563 "io_unit_size": 131072, 00:22:11.563 "max_aq_depth": 128, 00:22:11.563 "num_shared_buffers": 511, 00:22:11.563 "buf_cache_size": 4294967295, 00:22:11.563 "dif_insert_or_strip": false, 00:22:11.563 "zcopy": false, 00:22:11.563 "c2h_success": false, 00:22:11.563 "sock_priority": 0, 00:22:11.563 "abort_timeout_sec": 1, 00:22:11.563 "ack_timeout": 0, 00:22:11.563 "data_wr_pool_size": 0 00:22:11.563 } 00:22:11.563 }, 00:22:11.563 { 00:22:11.563 "method": "nvmf_create_subsystem", 00:22:11.563 "params": { 00:22:11.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.563 "allow_any_host": false, 00:22:11.563 "serial_number": "00000000000000000000", 00:22:11.563 "model_number": "SPDK bdev Controller", 00:22:11.563 "max_namespaces": 32, 00:22:11.563 "min_cntlid": 1, 00:22:11.563 "max_cntlid": 65519, 00:22:11.563 "ana_reporting": false 00:22:11.563 } 00:22:11.563 }, 00:22:11.563 { 00:22:11.563 "method": "nvmf_subsystem_add_host", 00:22:11.563 "params": { 00:22:11.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.563 "host": "nqn.2016-06.io.spdk:host1", 00:22:11.563 "psk": "key0" 00:22:11.563 } 00:22:11.563 }, 00:22:11.563 { 00:22:11.563 "method": "nvmf_subsystem_add_ns", 00:22:11.563 "params": { 00:22:11.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.563 "namespace": { 00:22:11.563 "nsid": 1, 00:22:11.563 "bdev_name": "malloc0", 00:22:11.563 "nguid": "29A8E42E957045EEA210F2373B1C6A3A", 00:22:11.563 "uuid": "29a8e42e-9570-45ee-a210-f2373b1c6a3a", 00:22:11.563 "no_auto_visible": false 00:22:11.563 } 00:22:11.563 } 00:22:11.563 }, 00:22:11.563 { 00:22:11.563 "method": "nvmf_subsystem_add_listener", 00:22:11.563 "params": { 
00:22:11.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.563 "listen_address": { 00:22:11.563 "trtype": "TCP", 00:22:11.563 "adrfam": "IPv4", 00:22:11.563 "traddr": "10.0.0.2", 00:22:11.563 "trsvcid": "4420" 00:22:11.563 }, 00:22:11.563 "secure_channel": true 00:22:11.563 } 00:22:11.563 } 00:22:11.563 ] 00:22:11.563 } 00:22:11.563 ] 00:22:11.563 }' 00:22:11.563 23:25:00 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:11.823 23:25:00 -- target/tls.sh@264 -- # bperfcfg='{ 00:22:11.823 "subsystems": [ 00:22:11.824 { 00:22:11.824 "subsystem": "keyring", 00:22:11.824 "config": [ 00:22:11.824 { 00:22:11.824 "method": "keyring_file_add_key", 00:22:11.824 "params": { 00:22:11.824 "name": "key0", 00:22:11.824 "path": "/tmp/tmp.bmCTHB2doi" 00:22:11.824 } 00:22:11.824 } 00:22:11.824 ] 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "subsystem": "iobuf", 00:22:11.824 "config": [ 00:22:11.824 { 00:22:11.824 "method": "iobuf_set_options", 00:22:11.824 "params": { 00:22:11.824 "small_pool_count": 8192, 00:22:11.824 "large_pool_count": 1024, 00:22:11.824 "small_bufsize": 8192, 00:22:11.824 "large_bufsize": 135168 00:22:11.824 } 00:22:11.824 } 00:22:11.824 ] 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "subsystem": "sock", 00:22:11.824 "config": [ 00:22:11.824 { 00:22:11.824 "method": "sock_impl_set_options", 00:22:11.824 "params": { 00:22:11.824 "impl_name": "posix", 00:22:11.824 "recv_buf_size": 2097152, 00:22:11.824 "send_buf_size": 2097152, 00:22:11.824 "enable_recv_pipe": true, 00:22:11.824 "enable_quickack": false, 00:22:11.824 "enable_placement_id": 0, 00:22:11.824 "enable_zerocopy_send_server": true, 00:22:11.824 "enable_zerocopy_send_client": false, 00:22:11.824 "zerocopy_threshold": 0, 00:22:11.824 "tls_version": 0, 00:22:11.824 "enable_ktls": false 00:22:11.824 } 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "method": "sock_impl_set_options", 00:22:11.824 "params": { 00:22:11.824 "impl_name": "ssl", 00:22:11.824 "recv_buf_size": 4096, 00:22:11.824 "send_buf_size": 4096, 00:22:11.824 "enable_recv_pipe": true, 00:22:11.824 "enable_quickack": false, 00:22:11.824 "enable_placement_id": 0, 00:22:11.824 "enable_zerocopy_send_server": true, 00:22:11.824 "enable_zerocopy_send_client": false, 00:22:11.824 "zerocopy_threshold": 0, 00:22:11.824 "tls_version": 0, 00:22:11.824 "enable_ktls": false 00:22:11.824 } 00:22:11.824 } 00:22:11.824 ] 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "subsystem": "vmd", 00:22:11.824 "config": [] 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "subsystem": "accel", 00:22:11.824 "config": [ 00:22:11.824 { 00:22:11.824 "method": "accel_set_options", 00:22:11.824 "params": { 00:22:11.824 "small_cache_size": 128, 00:22:11.824 "large_cache_size": 16, 00:22:11.824 "task_count": 2048, 00:22:11.824 "sequence_count": 2048, 00:22:11.824 "buf_count": 2048 00:22:11.824 } 00:22:11.824 } 00:22:11.824 ] 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "subsystem": "bdev", 00:22:11.824 "config": [ 00:22:11.824 { 00:22:11.824 "method": "bdev_set_options", 00:22:11.824 "params": { 00:22:11.824 "bdev_io_pool_size": 65535, 00:22:11.824 "bdev_io_cache_size": 256, 00:22:11.824 "bdev_auto_examine": true, 00:22:11.824 "iobuf_small_cache_size": 128, 00:22:11.824 "iobuf_large_cache_size": 16 00:22:11.824 } 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "method": "bdev_raid_set_options", 00:22:11.824 "params": { 00:22:11.824 "process_window_size_kb": 1024 00:22:11.824 } 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "method": 
"bdev_iscsi_set_options", 00:22:11.824 "params": { 00:22:11.824 "timeout_sec": 30 00:22:11.824 } 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "method": "bdev_nvme_set_options", 00:22:11.824 "params": { 00:22:11.824 "action_on_timeout": "none", 00:22:11.824 "timeout_us": 0, 00:22:11.824 "timeout_admin_us": 0, 00:22:11.824 "keep_alive_timeout_ms": 10000, 00:22:11.824 "arbitration_burst": 0, 00:22:11.824 "low_priority_weight": 0, 00:22:11.824 "medium_priority_weight": 0, 00:22:11.824 "high_priority_weight": 0, 00:22:11.824 "nvme_adminq_poll_period_us": 10000, 00:22:11.824 "nvme_ioq_poll_period_us": 0, 00:22:11.824 "io_queue_requests": 512, 00:22:11.824 "delay_cmd_submit": true, 00:22:11.824 "transport_retry_count": 4, 00:22:11.824 "bdev_retry_count": 3, 00:22:11.824 "transport_ack_timeout": 0, 00:22:11.824 "ctrlr_loss_timeout_sec": 0, 00:22:11.824 "reconnect_delay_sec": 0, 00:22:11.824 "fast_io_fail_timeout_sec": 0, 00:22:11.824 "disable_auto_failback": false, 00:22:11.824 "generate_uuids": false, 00:22:11.824 "transport_tos": 0, 00:22:11.824 "nvme_error_stat": false, 00:22:11.824 "rdma_srq_size": 0, 00:22:11.824 "io_path_stat": false, 00:22:11.824 "allow_accel_sequence": false, 00:22:11.824 "rdma_max_cq_size": 0, 00:22:11.824 "rdma_cm_event_timeout_ms": 0, 00:22:11.824 "dhchap_digests": [ 00:22:11.824 "sha256", 00:22:11.824 "sha384", 00:22:11.824 "sha512" 00:22:11.824 ], 00:22:11.824 "dhchap_dhgroups": [ 00:22:11.824 "null", 00:22:11.824 "ffdhe2048", 00:22:11.824 "ffdhe3072", 00:22:11.824 "ffdhe4096", 00:22:11.824 "ffdhe6144", 00:22:11.824 "ffdhe8192" 00:22:11.824 ] 00:22:11.824 } 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "method": "bdev_nvme_attach_controller", 00:22:11.824 "params": { 00:22:11.824 "name": "nvme0", 00:22:11.824 "trtype": "TCP", 00:22:11.824 "adrfam": "IPv4", 00:22:11.824 "traddr": "10.0.0.2", 00:22:11.824 "trsvcid": "4420", 00:22:11.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.824 "prchk_reftag": false, 00:22:11.824 "prchk_guard": false, 00:22:11.824 "ctrlr_loss_timeout_sec": 0, 00:22:11.824 "reconnect_delay_sec": 0, 00:22:11.824 "fast_io_fail_timeout_sec": 0, 00:22:11.824 "psk": "key0", 00:22:11.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:11.824 "hdgst": false, 00:22:11.824 "ddgst": false 00:22:11.824 } 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "method": "bdev_nvme_set_hotplug", 00:22:11.824 "params": { 00:22:11.824 "period_us": 100000, 00:22:11.824 "enable": false 00:22:11.824 } 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "method": "bdev_enable_histogram", 00:22:11.824 "params": { 00:22:11.824 "name": "nvme0n1", 00:22:11.824 "enable": true 00:22:11.824 } 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "method": "bdev_wait_for_examine" 00:22:11.824 } 00:22:11.824 ] 00:22:11.824 }, 00:22:11.824 { 00:22:11.824 "subsystem": "nbd", 00:22:11.824 "config": [] 00:22:11.824 } 00:22:11.824 ] 00:22:11.824 }' 00:22:11.824 23:25:00 -- target/tls.sh@266 -- # killprocess 3992405 00:22:11.824 23:25:00 -- common/autotest_common.sh@936 -- # '[' -z 3992405 ']' 00:22:11.824 23:25:00 -- common/autotest_common.sh@940 -- # kill -0 3992405 00:22:11.824 23:25:00 -- common/autotest_common.sh@941 -- # uname 00:22:11.824 23:25:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:11.824 23:25:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3992405 00:22:11.824 23:25:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:11.824 23:25:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:11.824 23:25:00 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3992405' 00:22:11.824 killing process with pid 3992405 00:22:11.824 23:25:00 -- common/autotest_common.sh@955 -- # kill 3992405 00:22:11.824 Received shutdown signal, test time was about 1.000000 seconds 00:22:11.824 00:22:11.824 Latency(us) 00:22:11.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.824 =================================================================================================================== 00:22:11.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.824 23:25:00 -- common/autotest_common.sh@960 -- # wait 3992405 00:22:12.085 23:25:01 -- target/tls.sh@267 -- # killprocess 3992383 00:22:12.085 23:25:01 -- common/autotest_common.sh@936 -- # '[' -z 3992383 ']' 00:22:12.085 23:25:01 -- common/autotest_common.sh@940 -- # kill -0 3992383 00:22:12.085 23:25:01 -- common/autotest_common.sh@941 -- # uname 00:22:12.085 23:25:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:12.085 23:25:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3992383 00:22:12.085 23:25:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:12.085 23:25:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:12.085 23:25:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3992383' 00:22:12.085 killing process with pid 3992383 00:22:12.085 23:25:01 -- common/autotest_common.sh@955 -- # kill 3992383 00:22:12.085 23:25:01 -- common/autotest_common.sh@960 -- # wait 3992383 00:22:12.085 23:25:01 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:12.085 23:25:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:12.085 23:25:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:12.085 23:25:01 -- target/tls.sh@269 -- # echo '{ 00:22:12.085 "subsystems": [ 00:22:12.085 { 00:22:12.085 "subsystem": "keyring", 00:22:12.085 "config": [ 00:22:12.085 { 00:22:12.085 "method": "keyring_file_add_key", 00:22:12.085 "params": { 00:22:12.085 "name": "key0", 00:22:12.085 "path": "/tmp/tmp.bmCTHB2doi" 00:22:12.085 } 00:22:12.085 } 00:22:12.085 ] 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "subsystem": "iobuf", 00:22:12.085 "config": [ 00:22:12.085 { 00:22:12.085 "method": "iobuf_set_options", 00:22:12.085 "params": { 00:22:12.085 "small_pool_count": 8192, 00:22:12.085 "large_pool_count": 1024, 00:22:12.085 "small_bufsize": 8192, 00:22:12.085 "large_bufsize": 135168 00:22:12.085 } 00:22:12.085 } 00:22:12.085 ] 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "subsystem": "sock", 00:22:12.085 "config": [ 00:22:12.085 { 00:22:12.085 "method": "sock_impl_set_options", 00:22:12.085 "params": { 00:22:12.085 "impl_name": "posix", 00:22:12.085 "recv_buf_size": 2097152, 00:22:12.085 "send_buf_size": 2097152, 00:22:12.085 "enable_recv_pipe": true, 00:22:12.085 "enable_quickack": false, 00:22:12.085 "enable_placement_id": 0, 00:22:12.085 "enable_zerocopy_send_server": true, 00:22:12.085 "enable_zerocopy_send_client": false, 00:22:12.085 "zerocopy_threshold": 0, 00:22:12.085 "tls_version": 0, 00:22:12.085 "enable_ktls": false 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "sock_impl_set_options", 00:22:12.085 "params": { 00:22:12.085 "impl_name": "ssl", 00:22:12.085 "recv_buf_size": 4096, 00:22:12.085 "send_buf_size": 4096, 00:22:12.085 "enable_recv_pipe": true, 00:22:12.085 "enable_quickack": false, 00:22:12.085 "enable_placement_id": 0, 00:22:12.085 "enable_zerocopy_send_server": true, 00:22:12.085 
"enable_zerocopy_send_client": false, 00:22:12.085 "zerocopy_threshold": 0, 00:22:12.085 "tls_version": 0, 00:22:12.085 "enable_ktls": false 00:22:12.085 } 00:22:12.085 } 00:22:12.085 ] 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "subsystem": "vmd", 00:22:12.085 "config": [] 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "subsystem": "accel", 00:22:12.085 "config": [ 00:22:12.085 { 00:22:12.085 "method": "accel_set_options", 00:22:12.085 "params": { 00:22:12.085 "small_cache_size": 128, 00:22:12.085 "large_cache_size": 16, 00:22:12.085 "task_count": 2048, 00:22:12.085 "sequence_count": 2048, 00:22:12.085 "buf_count": 2048 00:22:12.085 } 00:22:12.085 } 00:22:12.085 ] 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "subsystem": "bdev", 00:22:12.085 "config": [ 00:22:12.085 { 00:22:12.085 "method": "bdev_set_options", 00:22:12.085 "params": { 00:22:12.085 "bdev_io_pool_size": 65535, 00:22:12.085 "bdev_io_cache_size": 256, 00:22:12.085 "bdev_auto_examine": true, 00:22:12.085 "iobuf_small_cache_size": 128, 00:22:12.085 "iobuf_large_cache_size": 16 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "bdev_raid_set_options", 00:22:12.085 "params": { 00:22:12.085 "process_window_size_kb": 1024 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "bdev_iscsi_set_options", 00:22:12.085 "params": { 00:22:12.085 "timeout_sec": 30 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "bdev_nvme_set_options", 00:22:12.085 "params": { 00:22:12.085 "action_on_timeout": "none", 00:22:12.085 "timeout_us": 0, 00:22:12.085 "timeout_admin_us": 0, 00:22:12.085 "keep_alive_timeout_ms": 10000, 00:22:12.085 "arbitration_burst": 0, 00:22:12.085 "low_priority_weight": 0, 00:22:12.085 "medium_priority_weight": 0, 00:22:12.085 "high_priority_weight": 0, 00:22:12.085 "nvme_adminq_poll_period_us": 10000, 00:22:12.085 "nvme_ioq_poll_period_us": 0, 00:22:12.085 "io_queue_requests": 0, 00:22:12.085 "delay_cmd_submit": true, 00:22:12.085 "transport_retry_count": 4, 00:22:12.085 "bdev_retry_count": 3, 00:22:12.085 "transport_ack_timeout": 0, 00:22:12.085 "ctrlr_loss_timeout_sec": 0, 00:22:12.085 "reconnect_delay_sec": 0, 00:22:12.085 "fast_io_fail_timeout_sec": 0, 00:22:12.085 "disable_auto_failback": false, 00:22:12.085 "generate_uuids": false, 00:22:12.085 "transport_tos": 0, 00:22:12.085 "nvme_error_stat": false, 00:22:12.085 "rdma_srq_size": 0, 00:22:12.085 "io_path_stat": false, 00:22:12.085 "allow_accel_sequence": false, 00:22:12.085 "rdma_max_cq_size": 0, 00:22:12.085 "rdma_cm_event_timeout_ms": 0, 00:22:12.085 "dhchap_digests": [ 00:22:12.085 "sha256", 00:22:12.085 "sha384", 00:22:12.085 "sha512" 00:22:12.085 ], 00:22:12.085 "dhchap_dhgroups": [ 00:22:12.085 "null", 00:22:12.085 "ffdhe2048", 00:22:12.085 "ffdhe3072", 00:22:12.085 "ffdhe4096", 00:22:12.085 "ffdhe6144", 00:22:12.085 "ffdhe8192" 00:22:12.085 ] 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "bdev_nvme_set_hotplug", 00:22:12.085 "params": { 00:22:12.085 "period_us": 100000, 00:22:12.085 "enable": false 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "bdev_malloc_create", 00:22:12.085 "params": { 00:22:12.085 "name": "malloc0", 00:22:12.085 "num_blocks": 8192, 00:22:12.085 "block_size": 4096, 00:22:12.085 "physical_block_size": 4096, 00:22:12.085 "uuid": "29a8e42e-9570-45ee-a210-f2373b1c6a3a", 00:22:12.085 "optimal_io_boundary": 0 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "bdev_wait_for_examine" 00:22:12.085 } 00:22:12.085 ] 00:22:12.085 }, 
00:22:12.085 { 00:22:12.085 "subsystem": "nbd", 00:22:12.085 "config": [] 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "subsystem": "scheduler", 00:22:12.085 "config": [ 00:22:12.085 { 00:22:12.085 "method": "framework_set_scheduler", 00:22:12.085 "params": { 00:22:12.085 "name": "static" 00:22:12.085 } 00:22:12.085 } 00:22:12.085 ] 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "subsystem": "nvmf", 00:22:12.085 "config": [ 00:22:12.085 { 00:22:12.085 "method": "nvmf_set_config", 00:22:12.085 "params": { 00:22:12.085 "discovery_filter": "match_any", 00:22:12.085 "admin_cmd_passthru": { 00:22:12.085 "identify_ctrlr": false 00:22:12.085 } 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "nvmf_set_max_subsystems", 00:22:12.085 "params": { 00:22:12.085 "max_subsystems": 1024 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "nvmf_set_crdt", 00:22:12.085 "params": { 00:22:12.085 "crdt1": 0, 00:22:12.085 "crdt2": 0, 00:22:12.085 "crdt3": 0 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "nvmf_create_transport", 00:22:12.085 "params": { 00:22:12.085 "trtype": "TCP", 00:22:12.085 "max_queue_depth": 128, 00:22:12.085 "max_io_qpairs_per_ctrlr": 127, 00:22:12.085 "in_capsule_data_size": 4096, 00:22:12.085 "max_io_size": 131072, 00:22:12.085 "io_unit_size": 131072, 00:22:12.085 "max_aq_depth": 128, 00:22:12.085 "num_shared_buffers": 511, 00:22:12.085 "buf_cache_size": 4294967295, 00:22:12.085 "dif_insert_or_strip": false, 00:22:12.085 "zcopy": false, 00:22:12.085 "c2h_success": false, 00:22:12.085 "sock_priority": 0, 00:22:12.085 "abort_timeout_sec": 1, 00:22:12.085 "ack_timeout": 0, 00:22:12.085 "data_wr_pool_size": 0 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "nvmf_create_subsystem", 00:22:12.085 "params": { 00:22:12.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.085 "allow_any_host": false, 00:22:12.085 "serial_number": "00000000000000000000", 00:22:12.085 "model_number": "SPDK bdev 23:25:01 -- common/autotest_common.sh@10 -- # set +x 00:22:12.085 Controller", 00:22:12.085 "max_namespaces": 32, 00:22:12.085 "min_cntlid": 1, 00:22:12.085 "max_cntlid": 65519, 00:22:12.085 "ana_reporting": false 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "nvmf_subsystem_add_host", 00:22:12.085 "params": { 00:22:12.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.085 "host": "nqn.2016-06.io.spdk:host1", 00:22:12.085 "psk": "key0" 00:22:12.085 } 00:22:12.085 }, 00:22:12.085 { 00:22:12.085 "method": "nvmf_subsystem_add_ns", 00:22:12.085 "params": { 00:22:12.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.085 "namespace": { 00:22:12.085 "nsid": 1, 00:22:12.085 "bdev_name": "malloc0", 00:22:12.085 "nguid": "29A8E42E957045EEA210F2373B1C6A3A", 00:22:12.085 "uuid": "29a8e42e-9570-45ee-a210-f2373b1c6a3a", 00:22:12.085 "no_auto_visible": false 00:22:12.085 } 00:22:12.085 } 00:22:12.085 }, 00:22:12.086 { 00:22:12.086 "method": "nvmf_subsystem_add_listener", 00:22:12.086 "params": { 00:22:12.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.086 "listen_address": { 00:22:12.086 "trtype": "TCP", 00:22:12.086 "adrfam": "IPv4", 00:22:12.086 "traddr": "10.0.0.2", 00:22:12.086 "trsvcid": "4420" 00:22:12.086 }, 00:22:12.086 "secure_channel": true 00:22:12.086 } 00:22:12.086 } 00:22:12.086 ] 00:22:12.086 } 00:22:12.086 ] 00:22:12.086 }' 00:22:12.086 23:25:01 -- nvmf/common.sh@470 -- # nvmfpid=3993088 00:22:12.086 23:25:01 -- nvmf/common.sh@471 -- # waitforlisten 3993088 00:22:12.086 23:25:01 -- nvmf/common.sh@469 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:12.086 23:25:01 -- common/autotest_common.sh@817 -- # '[' -z 3993088 ']' 00:22:12.086 23:25:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.086 23:25:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:12.086 23:25:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.086 23:25:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:12.086 23:25:01 -- common/autotest_common.sh@10 -- # set +x 00:22:12.086 [2024-04-26 23:25:01.329661] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:12.086 [2024-04-26 23:25:01.329719] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.345 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.345 [2024-04-26 23:25:01.395699] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.345 [2024-04-26 23:25:01.426533] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.346 [2024-04-26 23:25:01.426571] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.346 [2024-04-26 23:25:01.426578] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.346 [2024-04-26 23:25:01.426585] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.346 [2024-04-26 23:25:01.426590] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.346 [2024-04-26 23:25:01.426641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.637 [2024-04-26 23:25:01.609296] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.637 [2024-04-26 23:25:01.641298] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:12.637 [2024-04-26 23:25:01.654031] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.924 23:25:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:12.924 23:25:02 -- common/autotest_common.sh@850 -- # return 0 00:22:12.924 23:25:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:12.924 23:25:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:12.924 23:25:02 -- common/autotest_common.sh@10 -- # set +x 00:22:12.924 23:25:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.924 23:25:02 -- target/tls.sh@272 -- # bdevperf_pid=3993163 00:22:12.924 23:25:02 -- target/tls.sh@273 -- # waitforlisten 3993163 /var/tmp/bdevperf.sock 00:22:12.924 23:25:02 -- common/autotest_common.sh@817 -- # '[' -z 3993163 ']' 00:22:12.924 23:25:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.924 23:25:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:12.924 23:25:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:12.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.924 23:25:02 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:12.924 23:25:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:12.924 23:25:02 -- common/autotest_common.sh@10 -- # set +x 00:22:12.924 23:25:02 -- target/tls.sh@270 -- # echo '{ 00:22:12.924 "subsystems": [ 00:22:12.924 { 00:22:12.924 "subsystem": "keyring", 00:22:12.924 "config": [ 00:22:12.924 { 00:22:12.924 "method": "keyring_file_add_key", 00:22:12.924 "params": { 00:22:12.924 "name": "key0", 00:22:12.924 "path": "/tmp/tmp.bmCTHB2doi" 00:22:12.924 } 00:22:12.924 } 00:22:12.924 ] 00:22:12.924 }, 00:22:12.924 { 00:22:12.924 "subsystem": "iobuf", 00:22:12.924 "config": [ 00:22:12.924 { 00:22:12.924 "method": "iobuf_set_options", 00:22:12.924 "params": { 00:22:12.924 "small_pool_count": 8192, 00:22:12.924 "large_pool_count": 1024, 00:22:12.924 "small_bufsize": 8192, 00:22:12.924 "large_bufsize": 135168 00:22:12.924 } 00:22:12.924 } 00:22:12.924 ] 00:22:12.924 }, 00:22:12.924 { 00:22:12.924 "subsystem": "sock", 00:22:12.924 "config": [ 00:22:12.924 { 00:22:12.924 "method": "sock_impl_set_options", 00:22:12.924 "params": { 00:22:12.924 "impl_name": "posix", 00:22:12.924 "recv_buf_size": 2097152, 00:22:12.924 "send_buf_size": 2097152, 00:22:12.924 "enable_recv_pipe": true, 00:22:12.924 "enable_quickack": false, 00:22:12.924 "enable_placement_id": 0, 00:22:12.924 "enable_zerocopy_send_server": true, 00:22:12.924 "enable_zerocopy_send_client": false, 00:22:12.924 "zerocopy_threshold": 0, 00:22:12.924 "tls_version": 0, 00:22:12.924 "enable_ktls": false 00:22:12.924 } 00:22:12.924 }, 00:22:12.924 { 00:22:12.924 "method": "sock_impl_set_options", 00:22:12.924 "params": { 00:22:12.924 "impl_name": "ssl", 00:22:12.924 "recv_buf_size": 4096, 00:22:12.924 "send_buf_size": 4096, 00:22:12.924 "enable_recv_pipe": true, 00:22:12.924 "enable_quickack": false, 00:22:12.924 "enable_placement_id": 0, 00:22:12.924 "enable_zerocopy_send_server": true, 00:22:12.924 "enable_zerocopy_send_client": false, 00:22:12.924 "zerocopy_threshold": 0, 00:22:12.924 "tls_version": 0, 00:22:12.924 "enable_ktls": false 00:22:12.924 } 00:22:12.924 } 00:22:12.924 ] 00:22:12.924 }, 00:22:12.924 { 00:22:12.924 "subsystem": "vmd", 00:22:12.924 "config": [] 00:22:12.924 }, 00:22:12.924 { 00:22:12.924 "subsystem": "accel", 00:22:12.924 "config": [ 00:22:12.924 { 00:22:12.924 "method": "accel_set_options", 00:22:12.924 "params": { 00:22:12.924 "small_cache_size": 128, 00:22:12.924 "large_cache_size": 16, 00:22:12.924 "task_count": 2048, 00:22:12.924 "sequence_count": 2048, 00:22:12.924 "buf_count": 2048 00:22:12.924 } 00:22:12.924 } 00:22:12.924 ] 00:22:12.924 }, 00:22:12.924 { 00:22:12.924 "subsystem": "bdev", 00:22:12.924 "config": [ 00:22:12.925 { 00:22:12.925 "method": "bdev_set_options", 00:22:12.925 "params": { 00:22:12.925 "bdev_io_pool_size": 65535, 00:22:12.925 "bdev_io_cache_size": 256, 00:22:12.925 "bdev_auto_examine": true, 00:22:12.925 "iobuf_small_cache_size": 128, 00:22:12.925 "iobuf_large_cache_size": 16 00:22:12.925 } 00:22:12.925 }, 00:22:12.925 { 00:22:12.925 "method": "bdev_raid_set_options", 00:22:12.925 "params": { 00:22:12.925 "process_window_size_kb": 1024 00:22:12.925 } 00:22:12.925 }, 00:22:12.925 { 00:22:12.925 "method": "bdev_iscsi_set_options", 00:22:12.925 "params": { 00:22:12.925 
"timeout_sec": 30 00:22:12.925 } 00:22:12.925 }, 00:22:12.925 { 00:22:12.925 "method": "bdev_nvme_set_options", 00:22:12.925 "params": { 00:22:12.925 "action_on_timeout": "none", 00:22:12.925 "timeout_us": 0, 00:22:12.925 "timeout_admin_us": 0, 00:22:12.925 "keep_alive_timeout_ms": 10000, 00:22:12.925 "arbitration_burst": 0, 00:22:12.925 "low_priority_weight": 0, 00:22:12.925 "medium_priority_weight": 0, 00:22:12.925 "high_priority_weight": 0, 00:22:12.925 "nvme_adminq_poll_period_us": 10000, 00:22:12.925 "nvme_ioq_poll_period_us": 0, 00:22:12.925 "io_queue_requests": 512, 00:22:12.925 "delay_cmd_submit": true, 00:22:12.925 "transport_retry_count": 4, 00:22:12.925 "bdev_retry_count": 3, 00:22:12.925 "transport_ack_timeout": 0, 00:22:12.925 "ctrlr_loss_timeout_sec": 0, 00:22:12.925 "reconnect_delay_sec": 0, 00:22:12.925 "fast_io_fail_timeout_sec": 0, 00:22:12.925 "disable_auto_failback": false, 00:22:12.925 "generate_uuids": false, 00:22:12.925 "transport_tos": 0, 00:22:12.925 "nvme_error_stat": false, 00:22:12.925 "rdma_srq_size": 0, 00:22:12.925 "io_path_stat": false, 00:22:12.925 "allow_accel_sequence": false, 00:22:12.925 "rdma_max_cq_size": 0, 00:22:12.925 "rdma_cm_event_timeout_ms": 0, 00:22:12.925 "dhchap_digests": [ 00:22:12.925 "sha256", 00:22:12.925 "sha384", 00:22:12.925 "sha512" 00:22:12.925 ], 00:22:12.925 "dhchap_dhgroups": [ 00:22:12.925 "null", 00:22:12.925 "ffdhe2048", 00:22:12.925 "ffdhe3072", 00:22:12.925 "ffdhe4096", 00:22:12.925 "ffdhe6144", 00:22:12.925 "ffdhe8192" 00:22:12.925 ] 00:22:12.925 } 00:22:12.925 }, 00:22:12.925 { 00:22:12.925 "method": "bdev_nvme_attach_controller", 00:22:12.925 "params": { 00:22:12.925 "name": "nvme0", 00:22:12.925 "trtype": "TCP", 00:22:12.925 "adrfam": "IPv4", 00:22:12.925 "traddr": "10.0.0.2", 00:22:12.925 "trsvcid": "4420", 00:22:12.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.925 "prchk_reftag": false, 00:22:12.925 "prchk_guard": false, 00:22:12.925 "ctrlr_loss_timeout_sec": 0, 00:22:12.925 "reconnect_delay_sec": 0, 00:22:12.925 "fast_io_fail_timeout_sec": 0, 00:22:12.925 "psk": "key0", 00:22:12.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.925 "hdgst": false, 00:22:12.925 "ddgst": false 00:22:12.925 } 00:22:12.925 }, 00:22:12.925 { 00:22:12.925 "method": "bdev_nvme_set_hotplug", 00:22:12.925 "params": { 00:22:12.925 "period_us": 100000, 00:22:12.925 "enable": false 00:22:12.925 } 00:22:12.925 }, 00:22:12.925 { 00:22:12.925 "method": "bdev_enable_histogram", 00:22:12.925 "params": { 00:22:12.925 "name": "nvme0n1", 00:22:12.925 "enable": true 00:22:12.925 } 00:22:12.925 }, 00:22:12.925 { 00:22:12.925 "method": "bdev_wait_for_examine" 00:22:12.925 } 00:22:12.925 ] 00:22:12.925 }, 00:22:12.925 { 00:22:12.925 "subsystem": "nbd", 00:22:12.925 "config": [] 00:22:12.925 } 00:22:12.925 ] 00:22:12.925 }' 00:22:13.187 [2024-04-26 23:25:02.179822] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:22:13.187 [2024-04-26 23:25:02.179885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3993163 ] 00:22:13.187 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.187 [2024-04-26 23:25:02.239232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.187 [2024-04-26 23:25:02.268490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.187 [2024-04-26 23:25:02.392743] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.757 23:25:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:13.757 23:25:02 -- common/autotest_common.sh@850 -- # return 0 00:22:13.757 23:25:02 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.757 23:25:02 -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:14.017 23:25:03 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.017 23:25:03 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.017 Running I/O for 1 seconds... 00:22:15.398 00:22:15.398 Latency(us) 00:22:15.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.398 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:15.398 Verification LBA range: start 0x0 length 0x2000 00:22:15.398 nvme0n1 : 1.03 3419.30 13.36 0.00 0.00 36964.79 8956.59 62477.65 00:22:15.398 =================================================================================================================== 00:22:15.398 Total : 3419.30 13.36 0.00 0.00 36964.79 8956.59 62477.65 00:22:15.398 0 00:22:15.398 23:25:04 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:15.398 23:25:04 -- target/tls.sh@279 -- # cleanup 00:22:15.398 23:25:04 -- target/tls.sh@15 -- # process_shm --id 0 00:22:15.398 23:25:04 -- common/autotest_common.sh@794 -- # type=--id 00:22:15.399 23:25:04 -- common/autotest_common.sh@795 -- # id=0 00:22:15.399 23:25:04 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:22:15.399 23:25:04 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:15.399 23:25:04 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:22:15.399 23:25:04 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:22:15.399 23:25:04 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:22:15.399 23:25:04 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:15.399 nvmf_trace.0 00:22:15.399 23:25:04 -- common/autotest_common.sh@809 -- # return 0 00:22:15.399 23:25:04 -- target/tls.sh@16 -- # killprocess 3993163 00:22:15.399 23:25:04 -- common/autotest_common.sh@936 -- # '[' -z 3993163 ']' 00:22:15.399 23:25:04 -- common/autotest_common.sh@940 -- # kill -0 3993163 00:22:15.399 23:25:04 -- common/autotest_common.sh@941 -- # uname 00:22:15.399 23:25:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:15.399 23:25:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3993163 00:22:15.399 23:25:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:15.399 23:25:04 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:22:15.399 23:25:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3993163' 00:22:15.399 killing process with pid 3993163 00:22:15.399 23:25:04 -- common/autotest_common.sh@955 -- # kill 3993163 00:22:15.399 Received shutdown signal, test time was about 1.000000 seconds 00:22:15.399 00:22:15.399 Latency(us) 00:22:15.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.399 =================================================================================================================== 00:22:15.399 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.399 23:25:04 -- common/autotest_common.sh@960 -- # wait 3993163 00:22:15.399 23:25:04 -- target/tls.sh@17 -- # nvmftestfini 00:22:15.399 23:25:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:15.399 23:25:04 -- nvmf/common.sh@117 -- # sync 00:22:15.399 23:25:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.399 23:25:04 -- nvmf/common.sh@120 -- # set +e 00:22:15.399 23:25:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.399 23:25:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.399 rmmod nvme_tcp 00:22:15.399 rmmod nvme_fabrics 00:22:15.399 rmmod nvme_keyring 00:22:15.399 23:25:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.399 23:25:04 -- nvmf/common.sh@124 -- # set -e 00:22:15.399 23:25:04 -- nvmf/common.sh@125 -- # return 0 00:22:15.399 23:25:04 -- nvmf/common.sh@478 -- # '[' -n 3993088 ']' 00:22:15.399 23:25:04 -- nvmf/common.sh@479 -- # killprocess 3993088 00:22:15.399 23:25:04 -- common/autotest_common.sh@936 -- # '[' -z 3993088 ']' 00:22:15.399 23:25:04 -- common/autotest_common.sh@940 -- # kill -0 3993088 00:22:15.399 23:25:04 -- common/autotest_common.sh@941 -- # uname 00:22:15.399 23:25:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:15.399 23:25:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3993088 00:22:15.399 23:25:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:15.399 23:25:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:15.399 23:25:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3993088' 00:22:15.399 killing process with pid 3993088 00:22:15.399 23:25:04 -- common/autotest_common.sh@955 -- # kill 3993088 00:22:15.399 23:25:04 -- common/autotest_common.sh@960 -- # wait 3993088 00:22:15.659 23:25:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:15.659 23:25:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:15.659 23:25:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:15.659 23:25:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.659 23:25:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.659 23:25:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.659 23:25:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.659 23:25:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.569 23:25:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:17.569 23:25:06 -- target/tls.sh@18 -- # rm -f /tmp/tmp.i0mQU05Jy1 /tmp/tmp.lhziUR1LdY /tmp/tmp.bmCTHB2doi 00:22:17.569 00:22:17.569 real 1m16.301s 00:22:17.569 user 1m56.026s 00:22:17.569 sys 0m25.900s 00:22:17.569 23:25:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:17.569 23:25:06 -- common/autotest_common.sh@10 -- # set +x 00:22:17.569 ************************************ 00:22:17.569 END TEST nvmf_tls 00:22:17.569 
************************************ 00:22:17.829 23:25:06 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:17.829 23:25:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:17.829 23:25:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:17.829 23:25:06 -- common/autotest_common.sh@10 -- # set +x 00:22:17.829 ************************************ 00:22:17.829 START TEST nvmf_fips 00:22:17.829 ************************************ 00:22:17.829 23:25:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:18.091 * Looking for test storage... 00:22:18.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:18.091 23:25:07 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.091 23:25:07 -- nvmf/common.sh@7 -- # uname -s 00:22:18.091 23:25:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.091 23:25:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.091 23:25:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.091 23:25:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.091 23:25:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.091 23:25:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.091 23:25:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.091 23:25:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.091 23:25:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.091 23:25:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.091 23:25:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.091 23:25:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.091 23:25:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.091 23:25:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.091 23:25:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.091 23:25:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.091 23:25:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.091 23:25:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.091 23:25:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.091 23:25:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.091 23:25:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.091 23:25:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.091 23:25:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.091 23:25:07 -- paths/export.sh@5 -- # export PATH 00:22:18.091 23:25:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.091 23:25:07 -- nvmf/common.sh@47 -- # : 0 00:22:18.091 23:25:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:18.091 23:25:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:18.091 23:25:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.091 23:25:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.091 23:25:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.091 23:25:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:18.091 23:25:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:18.091 23:25:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:18.091 23:25:07 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:18.091 23:25:07 -- fips/fips.sh@89 -- # check_openssl_version 00:22:18.091 23:25:07 -- fips/fips.sh@83 -- # local target=3.0.0 00:22:18.091 23:25:07 -- fips/fips.sh@85 -- # openssl version 00:22:18.091 23:25:07 -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:18.091 23:25:07 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:18.091 23:25:07 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:18.091 23:25:07 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:18.091 23:25:07 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:18.091 23:25:07 -- scripts/common.sh@333 -- # IFS=.-: 00:22:18.091 23:25:07 -- scripts/common.sh@333 -- # read -ra ver1 00:22:18.091 23:25:07 -- scripts/common.sh@334 -- # IFS=.-: 00:22:18.091 23:25:07 -- scripts/common.sh@334 -- # read -ra ver2 00:22:18.091 23:25:07 -- scripts/common.sh@335 -- # local 'op=>=' 00:22:18.091 23:25:07 -- scripts/common.sh@337 -- # ver1_l=3 00:22:18.091 23:25:07 -- scripts/common.sh@338 -- # ver2_l=3 00:22:18.091 23:25:07 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
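Note: the ge 3.0.9 3.0.0 check unrolled in the loop that follows is an ordinary component-wise version compare: both strings are split on ., -, and :, and each numeric field is compared left to right. A condensed re-implementation of the same idea (not the literal scripts/common.sh code):

    # Succeed when version $1 >= version $2; handles numeric dot-separated fields only.
    version_ge() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 0   # first higher field wins
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
        done
        return 0   # all fields equal
    }
    version_ge "$(openssl version | awk '{print $2}')" 3.0.0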
00:22:18.091 23:25:07 -- scripts/common.sh@341 -- # case "$op" in 00:22:18.091 23:25:07 -- scripts/common.sh@345 -- # : 1 00:22:18.091 23:25:07 -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:18.091 23:25:07 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:18.091 23:25:07 -- scripts/common.sh@362 -- # decimal 3 00:22:18.091 23:25:07 -- scripts/common.sh@350 -- # local d=3 00:22:18.091 23:25:07 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:18.091 23:25:07 -- scripts/common.sh@352 -- # echo 3 00:22:18.091 23:25:07 -- scripts/common.sh@362 -- # ver1[v]=3 00:22:18.091 23:25:07 -- scripts/common.sh@363 -- # decimal 3 00:22:18.091 23:25:07 -- scripts/common.sh@350 -- # local d=3 00:22:18.091 23:25:07 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:18.091 23:25:07 -- scripts/common.sh@352 -- # echo 3 00:22:18.091 23:25:07 -- scripts/common.sh@363 -- # ver2[v]=3 00:22:18.091 23:25:07 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:18.091 23:25:07 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:18.091 23:25:07 -- scripts/common.sh@361 -- # (( v++ )) 00:22:18.091 23:25:07 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:18.091 23:25:07 -- scripts/common.sh@362 -- # decimal 0 00:22:18.091 23:25:07 -- scripts/common.sh@350 -- # local d=0 00:22:18.091 23:25:07 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:18.091 23:25:07 -- scripts/common.sh@352 -- # echo 0 00:22:18.091 23:25:07 -- scripts/common.sh@362 -- # ver1[v]=0 00:22:18.091 23:25:07 -- scripts/common.sh@363 -- # decimal 0 00:22:18.091 23:25:07 -- scripts/common.sh@350 -- # local d=0 00:22:18.091 23:25:07 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:18.091 23:25:07 -- scripts/common.sh@352 -- # echo 0 00:22:18.091 23:25:07 -- scripts/common.sh@363 -- # ver2[v]=0 00:22:18.091 23:25:07 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:18.091 23:25:07 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:18.091 23:25:07 -- scripts/common.sh@361 -- # (( v++ )) 00:22:18.091 23:25:07 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:18.091 23:25:07 -- scripts/common.sh@362 -- # decimal 9 00:22:18.091 23:25:07 -- scripts/common.sh@350 -- # local d=9 00:22:18.091 23:25:07 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:18.091 23:25:07 -- scripts/common.sh@352 -- # echo 9 00:22:18.091 23:25:07 -- scripts/common.sh@362 -- # ver1[v]=9 00:22:18.091 23:25:07 -- scripts/common.sh@363 -- # decimal 0 00:22:18.091 23:25:07 -- scripts/common.sh@350 -- # local d=0 00:22:18.091 23:25:07 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:18.091 23:25:07 -- scripts/common.sh@352 -- # echo 0 00:22:18.091 23:25:07 -- scripts/common.sh@363 -- # ver2[v]=0 00:22:18.091 23:25:07 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:18.091 23:25:07 -- scripts/common.sh@364 -- # return 0 00:22:18.091 23:25:07 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:18.091 23:25:07 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:18.091 23:25:07 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:18.091 23:25:07 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:18.091 23:25:07 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:18.091 23:25:07 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:18.091 23:25:07 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:18.091 23:25:07 -- fips/fips.sh@113 -- # build_openssl_config 00:22:18.091 23:25:07 -- fips/fips.sh@37 -- # cat 00:22:18.091 23:25:07 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:18.091 23:25:07 -- fips/fips.sh@58 -- # cat - 00:22:18.091 23:25:07 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:18.091 23:25:07 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:18.091 23:25:07 -- fips/fips.sh@116 -- # mapfile -t providers 00:22:18.091 23:25:07 -- fips/fips.sh@116 -- # openssl list -providers 00:22:18.091 23:25:07 -- fips/fips.sh@116 -- # grep name 00:22:18.091 23:25:07 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:18.091 23:25:07 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:18.091 23:25:07 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:18.091 23:25:07 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:18.091 23:25:07 -- common/autotest_common.sh@638 -- # local es=0 00:22:18.091 23:25:07 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:18.091 23:25:07 -- fips/fips.sh@127 -- # : 00:22:18.091 23:25:07 -- common/autotest_common.sh@626 -- # local arg=openssl 00:22:18.091 23:25:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:18.091 23:25:07 -- common/autotest_common.sh@630 -- # type -t openssl 00:22:18.091 23:25:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:18.091 23:25:07 -- common/autotest_common.sh@632 -- # type -P openssl 00:22:18.091 23:25:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:18.091 23:25:07 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:22:18.091 23:25:07 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:22:18.091 23:25:07 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:22:18.091 Error setting digest 00:22:18.091 00A233B16B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:18.091 00A233B16B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:18.091 23:25:07 -- common/autotest_common.sh@641 -- # es=1 00:22:18.091 23:25:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:18.091 23:25:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:18.091 23:25:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:18.091 23:25:07 -- fips/fips.sh@130 -- # nvmftestinit 00:22:18.091 23:25:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:18.091 23:25:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.091 23:25:07 -- nvmf/common.sh@437 -- # prepare_net_devs 
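Note: the provider listing and the deliberately failing openssl md5 above are how fips.sh proves FIPS mode is actually enforced; the two "Error setting digest" lines are the expected outcome, not a test failure. The probe boils down to the following (spdk_fips.conf is the OpenSSL config the test generates on the fly):

    export OPENSSL_CONF=spdk_fips.conf
    # Expect both a base provider and a fips provider to be loaded.
    openssl list -providers | grep name
    # MD5 is not FIPS-approved, so this must fail when FIPS is enforced.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "FIPS mode is not enforced" >&2
        exit 1
    fi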
00:22:18.091 23:25:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:18.091 23:25:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:18.091 23:25:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.091 23:25:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.091 23:25:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.091 23:25:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:18.091 23:25:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:18.091 23:25:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:18.091 23:25:07 -- common/autotest_common.sh@10 -- # set +x 00:22:26.238 23:25:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:26.238 23:25:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:26.238 23:25:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:26.238 23:25:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:26.238 23:25:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:26.238 23:25:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:26.238 23:25:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:26.238 23:25:13 -- nvmf/common.sh@295 -- # net_devs=() 00:22:26.238 23:25:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:26.238 23:25:13 -- nvmf/common.sh@296 -- # e810=() 00:22:26.238 23:25:13 -- nvmf/common.sh@296 -- # local -ga e810 00:22:26.238 23:25:13 -- nvmf/common.sh@297 -- # x722=() 00:22:26.238 23:25:13 -- nvmf/common.sh@297 -- # local -ga x722 00:22:26.238 23:25:13 -- nvmf/common.sh@298 -- # mlx=() 00:22:26.238 23:25:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:26.238 23:25:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.238 23:25:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.239 23:25:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.239 23:25:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.239 23:25:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.239 23:25:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.239 23:25:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.239 23:25:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.239 23:25:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.239 23:25:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.239 23:25:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.239 23:25:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:26.239 23:25:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:26.239 23:25:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:26.239 23:25:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:26.239 23:25:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:26.239 23:25:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:26.239 23:25:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.239 23:25:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:26.239 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:26.239 23:25:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.239 23:25:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.239 23:25:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.239 23:25:13 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.239 23:25:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.239 23:25:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.239 23:25:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:26.239 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:26.239 23:25:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:26.239 23:25:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.239 23:25:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.239 23:25:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:26.239 23:25:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.239 23:25:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:26.239 Found net devices under 0000:31:00.0: cvl_0_0 00:22:26.239 23:25:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.239 23:25:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.239 23:25:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.239 23:25:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:26.239 23:25:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.239 23:25:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:26.239 Found net devices under 0000:31:00.1: cvl_0_1 00:22:26.239 23:25:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.239 23:25:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:26.239 23:25:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:26.239 23:25:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:26.239 23:25:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.239 23:25:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.239 23:25:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.239 23:25:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:26.239 23:25:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.239 23:25:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.239 23:25:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:26.239 23:25:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.239 23:25:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.239 23:25:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:26.239 23:25:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:26.239 23:25:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.239 23:25:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.239 23:25:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.239 23:25:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:22:26.239 23:25:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:26.239 23:25:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.239 23:25:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.239 23:25:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.239 23:25:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:22:26.239 00:22:26.239 --- 10.0.0.2 ping statistics --- 00:22:26.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.239 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:22:26.239 23:25:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:22:26.239 00:22:26.239 --- 10.0.0.1 ping statistics --- 00:22:26.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.239 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:22:26.239 23:25:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.239 23:25:14 -- nvmf/common.sh@411 -- # return 0 00:22:26.239 23:25:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:26.239 23:25:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.239 23:25:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:26.239 23:25:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.239 23:25:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:26.239 23:25:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:26.239 23:25:14 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:26.239 23:25:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:26.239 23:25:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:26.239 23:25:14 -- common/autotest_common.sh@10 -- # set +x 00:22:26.239 23:25:14 -- nvmf/common.sh@470 -- # nvmfpid=3997881 00:22:26.239 23:25:14 -- nvmf/common.sh@471 -- # waitforlisten 3997881 00:22:26.239 23:25:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:26.239 23:25:14 -- common/autotest_common.sh@817 -- # '[' -z 3997881 ']' 00:22:26.239 23:25:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.239 23:25:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:26.239 23:25:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.239 23:25:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:26.239 23:25:14 -- common/autotest_common.sh@10 -- # set +x 00:22:26.239 [2024-04-26 23:25:14.409996] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
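The target launch just traced is the standard autotest pattern: nvmfappstart prepends the namespace wrapper to NVMF_APP (the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") step above), backgrounds nvmf_tgt, and waitforlisten polls until the RPC socket answers. A minimal sketch of that idiom, assuming a generic SPDK checkout (relative paths are illustrative, not the jenkins workspace paths used here):

    # Launch the target inside the test namespace, as the trace above does.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten idiom: poll the UNIX-domain RPC socket until the app responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The startup banner that follows shows the EAL arguments the wrapper passed through.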
00:22:26.239 [2024-04-26 23:25:14.410076] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.239 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.239 [2024-04-26 23:25:14.482498] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.239 [2024-04-26 23:25:14.518935] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.239 [2024-04-26 23:25:14.519009] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.239 [2024-04-26 23:25:14.519017] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.239 [2024-04-26 23:25:14.519025] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.239 [2024-04-26 23:25:14.519031] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.239 [2024-04-26 23:25:14.519051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.239 23:25:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.239 23:25:15 -- common/autotest_common.sh@850 -- # return 0 00:22:26.239 23:25:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:26.239 23:25:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:26.239 23:25:15 -- common/autotest_common.sh@10 -- # set +x 00:22:26.239 23:25:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.239 23:25:15 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:26.239 23:25:15 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:26.239 23:25:15 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:26.239 23:25:15 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:26.239 23:25:15 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:26.239 23:25:15 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:26.239 23:25:15 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:26.239 23:25:15 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:26.239 [2024-04-26 23:25:15.337973] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.239 [2024-04-26 23:25:15.353978] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.239 [2024-04-26 23:25:15.354144] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.239 [2024-04-26 23:25:15.380679] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:26.239 malloc0 00:22:26.239 23:25:15 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.239 23:25:15 -- fips/fips.sh@147 -- # bdevperf_pid=3998230 00:22:26.239 23:25:15 -- fips/fips.sh@148 -- # waitforlisten 3998230 /var/tmp/bdevperf.sock 00:22:26.239 23:25:15 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 
-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.239 23:25:15 -- common/autotest_common.sh@817 -- # '[' -z 3998230 ']' 00:22:26.239 23:25:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.239 23:25:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:26.239 23:25:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.239 23:25:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:26.239 23:25:15 -- common/autotest_common.sh@10 -- # set +x 00:22:26.239 [2024-04-26 23:25:15.460765] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:26.239 [2024-04-26 23:25:15.460818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998230 ] 00:22:26.239 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.499 [2024-04-26 23:25:15.512015] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.499 [2024-04-26 23:25:15.538532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.077 23:25:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:27.077 23:25:16 -- common/autotest_common.sh@850 -- # return 0 00:22:27.077 23:25:16 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:27.338 [2024-04-26 23:25:16.358221] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.338 [2024-04-26 23:25:16.358286] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:27.338 TLSTESTn1 00:22:27.338 23:25:16 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.338 Running I/O for 10 seconds... 
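The bdev_nvme_attach_controller call above is the initiator half of the TLS setup; the target half was wired earlier when setup_nvmf_tgt_conf registered the interchange-format key (the NVMeTLSkey-1:01:... string) for the host NQN. A hedged sketch of both halves, assuming a generic SPDK build with the file-path PSK form that the deprecation notices above confirm this version uses (key value and NQNs copied from the trace, file paths illustrative):

    # PSK interchange format, configured identically on both sides.
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt   # a world-readable PSK file would defeat the point

    # Target side: allow host1 to connect to cnode1 with this PSK.
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key.txt

    # Initiator side (bdevperf RPC socket): attach with the same PSK, as logged above.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt

Both deprecation warnings in this run (nvmf_tcp_psk_path and spdk_nvme_ctrlr_opts.psk) refer to this file-path style of PSK delivery, scheduled for removal in v24.09.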
00:22:39.570
00:22:39.570 Latency(us)
00:22:39.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.570 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:39.570 Verification LBA range: start 0x0 length 0x2000
00:22:39.570 TLSTESTn1 : 10.04 3759.46 14.69 0.00 0.00 33973.02 6116.69 78643.20
00:22:39.570 ===================================================================================================================
00:22:39.570 Total : 3759.46 14.69 0.00 0.00 33973.02 6116.69 78643.20
00:22:39.570 0
00:22:39.570 23:25:26 -- fips/fips.sh@1 -- # cleanup
00:22:39.570 23:25:26 -- fips/fips.sh@15 -- # process_shm --id 0
00:22:39.570 23:25:26 -- common/autotest_common.sh@794 -- # type=--id
00:22:39.570 23:25:26 -- common/autotest_common.sh@795 -- # id=0
00:22:39.570 23:25:26 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']'
00:22:39.570 23:25:26 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:22:39.570 23:25:26 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0
00:22:39.570 23:25:26 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]]
00:22:39.570 23:25:26 -- common/autotest_common.sh@806 -- # for n in $shm_files
00:22:39.570 23:25:26 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:22:39.570 nvmf_trace.0
00:22:39.570 23:25:26 -- common/autotest_common.sh@809 -- # return 0
00:22:39.570 23:25:26 -- fips/fips.sh@16 -- # killprocess 3998230
00:22:39.570 23:25:26 -- common/autotest_common.sh@936 -- # '[' -z 3998230 ']'
00:22:39.570 23:25:26 -- common/autotest_common.sh@940 -- # kill -0 3998230
00:22:39.570 23:25:26 -- common/autotest_common.sh@941 -- # uname
00:22:39.570 23:25:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:39.570 23:25:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3998230
00:22:39.570 23:25:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:22:39.570 23:25:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:22:39.570 23:25:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3998230'
00:22:39.570 killing process with pid 3998230
00:22:39.570 23:25:26 -- common/autotest_common.sh@955 -- # kill 3998230
00:22:39.570 Received shutdown signal, test time was about 10.000000 seconds
00:22:39.570
00:22:39.570 Latency(us)
00:22:39.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.570 ===================================================================================================================
00:22:39.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:39.570 [2024-04-26 23:25:26.759491] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:22:39.570 23:25:26 -- common/autotest_common.sh@960 -- # wait 3998230
00:22:39.570 23:25:26 -- fips/fips.sh@17 -- # nvmftestfini
00:22:39.570 23:25:26 -- nvmf/common.sh@477 -- # nvmfcleanup
00:22:39.570 23:25:26 -- nvmf/common.sh@117 -- # sync
00:22:39.570 23:25:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:39.570 23:25:26 -- nvmf/common.sh@120 -- # set +e
00:22:39.570 23:25:26 -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:39.570 23:25:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:39.570 rmmod nvme_tcp
00:22:39.570 rmmod nvme_fabrics
00:22:39.570 rmmod nvme_keyring
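The TLSTESTn1 verification table above is easy to sanity-check: with 4096-byte I/Os, throughput in MiB/s should equal IOPS times 4096 divided by 2^20, and the totals should scale with the ~10.04 s runtime.

    awk 'BEGIN { iops = 3759.46; printf "%.2f MiB/s\n", iops * 4096 / 1048576 }'
    # -> 14.69 MiB/s, matching the MiB/s column
    # total I/Os ~ 3759.46 IOPS * 10.04 s ~ 37,745 commands over the TLS run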
00:22:39.570 23:25:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.570 23:25:26 -- nvmf/common.sh@124 -- # set -e 00:22:39.570 23:25:26 -- nvmf/common.sh@125 -- # return 0 00:22:39.570 23:25:26 -- nvmf/common.sh@478 -- # '[' -n 3997881 ']' 00:22:39.570 23:25:26 -- nvmf/common.sh@479 -- # killprocess 3997881 00:22:39.570 23:25:26 -- common/autotest_common.sh@936 -- # '[' -z 3997881 ']' 00:22:39.570 23:25:26 -- common/autotest_common.sh@940 -- # kill -0 3997881 00:22:39.570 23:25:26 -- common/autotest_common.sh@941 -- # uname 00:22:39.570 23:25:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:39.570 23:25:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3997881 00:22:39.570 23:25:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:39.570 23:25:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:39.570 23:25:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3997881' 00:22:39.570 killing process with pid 3997881 00:22:39.570 23:25:26 -- common/autotest_common.sh@955 -- # kill 3997881 00:22:39.570 [2024-04-26 23:25:26.988198] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:39.570 23:25:26 -- common/autotest_common.sh@960 -- # wait 3997881 00:22:39.570 23:25:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:39.570 23:25:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:39.570 23:25:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:39.570 23:25:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.570 23:25:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.570 23:25:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.570 23:25:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.571 23:25:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.142 23:25:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:40.142 23:25:29 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:40.142 00:22:40.142 real 0m22.200s 00:22:40.142 user 0m23.403s 00:22:40.142 sys 0m9.401s 00:22:40.142 23:25:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:40.142 23:25:29 -- common/autotest_common.sh@10 -- # set +x 00:22:40.142 ************************************ 00:22:40.142 END TEST nvmf_fips 00:22:40.142 ************************************ 00:22:40.142 23:25:29 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:22:40.142 23:25:29 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:40.142 23:25:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:40.142 23:25:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:40.142 23:25:29 -- common/autotest_common.sh@10 -- # set +x 00:22:40.142 ************************************ 00:22:40.142 START TEST nvmf_fuzz 00:22:40.142 ************************************ 00:22:40.142 23:25:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:40.403 * Looking for test storage... 
00:22:40.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:40.403 23:25:29 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.403 23:25:29 -- nvmf/common.sh@7 -- # uname -s 00:22:40.403 23:25:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.403 23:25:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.403 23:25:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.403 23:25:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.403 23:25:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.403 23:25:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.403 23:25:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.403 23:25:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.403 23:25:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.403 23:25:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.403 23:25:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.403 23:25:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.403 23:25:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.403 23:25:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.403 23:25:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.403 23:25:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.403 23:25:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.403 23:25:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.403 23:25:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.403 23:25:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.403 23:25:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.403 23:25:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.403 23:25:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.403 23:25:29 -- paths/export.sh@5 -- # export PATH 00:22:40.403 23:25:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.403 23:25:29 -- nvmf/common.sh@47 -- # : 0 00:22:40.403 23:25:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:40.403 23:25:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:40.403 23:25:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.403 23:25:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.403 23:25:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.403 23:25:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:40.403 23:25:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:40.403 23:25:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:40.403 23:25:29 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:40.403 23:25:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:40.403 23:25:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.403 23:25:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:40.403 23:25:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:40.403 23:25:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:40.403 23:25:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.403 23:25:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.403 23:25:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.403 23:25:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:40.403 23:25:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:40.403 23:25:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.403 23:25:29 -- common/autotest_common.sh@10 -- # set +x 00:22:48.551 23:25:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:48.551 23:25:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:48.551 23:25:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:48.551 23:25:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:48.551 23:25:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:48.551 23:25:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:48.551 23:25:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:48.551 23:25:36 -- nvmf/common.sh@295 -- # net_devs=() 00:22:48.551 23:25:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:48.551 23:25:36 -- nvmf/common.sh@296 -- # e810=() 00:22:48.551 23:25:36 -- nvmf/common.sh@296 -- # local -ga e810 00:22:48.551 23:25:36 -- nvmf/common.sh@297 -- # x722=() 
00:22:48.551 23:25:36 -- nvmf/common.sh@297 -- # local -ga x722 00:22:48.551 23:25:36 -- nvmf/common.sh@298 -- # mlx=() 00:22:48.551 23:25:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:48.551 23:25:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.551 23:25:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:48.551 23:25:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:48.551 23:25:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:48.551 23:25:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.551 23:25:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:48.551 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:48.551 23:25:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.551 23:25:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:48.551 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:48.551 23:25:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:48.551 23:25:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:48.551 23:25:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.551 23:25:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.551 23:25:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:48.551 23:25:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.551 23:25:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:48.552 Found net devices under 0000:31:00.0: cvl_0_0 00:22:48.552 23:25:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:22:48.552 23:25:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.552 23:25:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.552 23:25:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:48.552 23:25:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.552 23:25:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:48.552 Found net devices under 0000:31:00.1: cvl_0_1 00:22:48.552 23:25:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.552 23:25:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:48.552 23:25:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:48.552 23:25:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:48.552 23:25:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:48.552 23:25:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:48.552 23:25:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.552 23:25:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.552 23:25:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.552 23:25:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:48.552 23:25:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.552 23:25:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.552 23:25:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:48.552 23:25:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.552 23:25:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.552 23:25:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:48.552 23:25:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:48.552 23:25:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.552 23:25:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.552 23:25:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.552 23:25:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.552 23:25:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:48.552 23:25:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.552 23:25:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.552 23:25:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.552 23:25:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:48.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:22:48.552 00:22:48.552 --- 10.0.0.2 ping statistics --- 00:22:48.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.552 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:22:48.552 23:25:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:22:48.552 00:22:48.552 --- 10.0.0.1 ping statistics --- 00:22:48.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.552 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:22:48.552 23:25:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.552 23:25:36 -- nvmf/common.sh@411 -- # return 0 00:22:48.552 23:25:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:48.552 23:25:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.552 23:25:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:48.552 23:25:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:48.552 23:25:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.552 23:25:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:48.552 23:25:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:48.552 23:25:36 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4004639 00:22:48.552 23:25:36 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:48.552 23:25:36 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:48.552 23:25:36 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4004639 00:22:48.552 23:25:36 -- common/autotest_common.sh@817 -- # '[' -z 4004639 ']' 00:22:48.552 23:25:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.552 23:25:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:48.552 23:25:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
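Before the fuzz target is configured below, note what the repeated nvmf_tcp_init pass above actually builds: the two ports of the same NIC act as a loopback pair, with the target port moved into a private namespace so initiator and target network stacks are genuinely separate, and the final pings prove connectivity in both directions. The equivalent wiring by hand (interface and namespace names copied from this rig):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back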
00:22:48.552 23:25:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:48.552 23:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:48.552 23:25:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:48.552 23:25:37 -- common/autotest_common.sh@850 -- # return 0 00:22:48.552 23:25:37 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.552 23:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.552 23:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:48.552 23:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.552 23:25:37 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:48.552 23:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.552 23:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:48.552 Malloc0 00:22:48.552 23:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.552 23:25:37 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.552 23:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.552 23:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:48.552 23:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.552 23:25:37 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:48.552 23:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.552 23:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:48.552 23:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.552 23:25:37 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.552 23:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.552 23:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:48.552 23:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.552 23:25:37 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:48.552 23:25:37 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:20.692 Fuzzing completed. Shutting down the fuzz application 00:23:20.692 00:23:20.692 Dumping successful admin opcodes: 00:23:20.692 8, 9, 10, 24, 00:23:20.692 Dumping successful io opcodes: 00:23:20.692 0, 9, 00:23:20.692 NS: 0x200003aeff00 I/O qp, Total commands completed: 820302, total successful commands: 4761, random_seed: 3943500224 00:23:20.692 NS: 0x200003aeff00 admin qp, Total commands completed: 107275, total successful commands: 880, random_seed: 363028928 00:23:20.692 23:26:07 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:20.692 Fuzzing completed. 
Shutting down the fuzz application 00:23:20.692 00:23:20.692 Dumping successful admin opcodes: 00:23:20.692 24, 00:23:20.692 Dumping successful io opcodes: 00:23:20.692 00:23:20.692 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1814386479 00:23:20.692 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1814489317 00:23:20.692 23:26:09 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.692 23:26:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.692 23:26:09 -- common/autotest_common.sh@10 -- # set +x 00:23:20.692 23:26:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.692 23:26:09 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:20.692 23:26:09 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:20.692 23:26:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:20.692 23:26:09 -- nvmf/common.sh@117 -- # sync 00:23:20.692 23:26:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:20.692 23:26:09 -- nvmf/common.sh@120 -- # set +e 00:23:20.692 23:26:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.692 23:26:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:20.692 rmmod nvme_tcp 00:23:20.692 rmmod nvme_fabrics 00:23:20.692 rmmod nvme_keyring 00:23:20.692 23:26:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.692 23:26:09 -- nvmf/common.sh@124 -- # set -e 00:23:20.692 23:26:09 -- nvmf/common.sh@125 -- # return 0 00:23:20.692 23:26:09 -- nvmf/common.sh@478 -- # '[' -n 4004639 ']' 00:23:20.692 23:26:09 -- nvmf/common.sh@479 -- # killprocess 4004639 00:23:20.692 23:26:09 -- common/autotest_common.sh@936 -- # '[' -z 4004639 ']' 00:23:20.692 23:26:09 -- common/autotest_common.sh@940 -- # kill -0 4004639 00:23:20.692 23:26:09 -- common/autotest_common.sh@941 -- # uname 00:23:20.692 23:26:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:20.692 23:26:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4004639 00:23:20.692 23:26:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:20.692 23:26:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:20.692 23:26:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4004639' 00:23:20.692 killing process with pid 4004639 00:23:20.692 23:26:09 -- common/autotest_common.sh@955 -- # kill 4004639 00:23:20.692 23:26:09 -- common/autotest_common.sh@960 -- # wait 4004639 00:23:20.692 23:26:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:20.692 23:26:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:20.692 23:26:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:20.692 23:26:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.692 23:26:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.692 23:26:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.692 23:26:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.692 23:26:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.610 23:26:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.610 23:26:11 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:22.610 00:23:22.610 real 0m42.167s 00:23:22.610 user 0m56.301s 00:23:22.610 sys 
0m14.965s 00:23:22.610 23:26:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:22.610 23:26:11 -- common/autotest_common.sh@10 -- # set +x 00:23:22.610 ************************************ 00:23:22.610 END TEST nvmf_fuzz 00:23:22.610 ************************************ 00:23:22.610 23:26:11 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:22.610 23:26:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:22.610 23:26:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:22.610 23:26:11 -- common/autotest_common.sh@10 -- # set +x 00:23:22.610 ************************************ 00:23:22.610 START TEST nvmf_multiconnection 00:23:22.610 ************************************ 00:23:22.610 23:26:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:22.610 * Looking for test storage... 00:23:22.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:22.610 23:26:11 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.610 23:26:11 -- nvmf/common.sh@7 -- # uname -s 00:23:22.610 23:26:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.610 23:26:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.610 23:26:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.610 23:26:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.610 23:26:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.610 23:26:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.610 23:26:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.610 23:26:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.610 23:26:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.610 23:26:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.610 23:26:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:22.610 23:26:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:22.610 23:26:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.611 23:26:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.611 23:26:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.611 23:26:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.611 23:26:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.611 23:26:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.611 23:26:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.611 23:26:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.611 23:26:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:23:22.611 23:26:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.611 23:26:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.611 23:26:11 -- paths/export.sh@5 -- # export PATH 00:23:22.611 23:26:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.611 23:26:11 -- nvmf/common.sh@47 -- # : 0 00:23:22.611 23:26:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.611 23:26:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.611 23:26:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.611 23:26:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.611 23:26:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.611 23:26:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.611 23:26:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.611 23:26:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.611 23:26:11 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:22.611 23:26:11 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:22.611 23:26:11 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:22.611 23:26:11 -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:22.611 23:26:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:22.611 23:26:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.611 23:26:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:22.611 23:26:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:22.611 23:26:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:22.611 23:26:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.611 23:26:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.611 23:26:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.872 23:26:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:22.872 23:26:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:22.872 23:26:11 -- nvmf/common.sh@285 -- # xtrace_disable 
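The gather_supported_nvmf_pci_devs pass that runs next (its third appearance in this log) reduces to bucketing PCI functions by vendor:device ID and resolving each function to its kernel netdev through sysfs. A hedged standalone sketch of the same idea, with the ID table abridged to the devices this rig reports and error handling elided:

    # Classify fabrics-capable NICs by vendor:device, then find their net devices.
    declare -a e810 x722 mlx net_devs
    while read -r addr _class vd _; do
        case "$vd" in
            8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 (ice driver, seen above)
            8086:37d2)           x722+=("$addr") ;;   # Intel X722
            15b3:*)              mlx+=("$addr")  ;;   # Mellanox ConnectX family
        esac
    done < <(lspci -Dn)
    for pci in "${e810[@]}"; do
        [[ -d /sys/bus/pci/devices/$pci/net ]] || continue
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same sysfs lookup as the trace
        net_devs+=("${pci_net_devs[@]##*/}")               # strip path, keep ifname
    done
    echo "Found net devices: ${net_devs[*]}"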
00:23:22.872 23:26:11 -- common/autotest_common.sh@10 -- # set +x 00:23:29.464 23:26:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:29.464 23:26:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.464 23:26:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.464 23:26:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:29.464 23:26:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.464 23:26:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.464 23:26:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.464 23:26:18 -- nvmf/common.sh@295 -- # net_devs=() 00:23:29.464 23:26:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.464 23:26:18 -- nvmf/common.sh@296 -- # e810=() 00:23:29.464 23:26:18 -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.464 23:26:18 -- nvmf/common.sh@297 -- # x722=() 00:23:29.464 23:26:18 -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.464 23:26:18 -- nvmf/common.sh@298 -- # mlx=() 00:23:29.464 23:26:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:29.464 23:26:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.464 23:26:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.464 23:26:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:29.464 23:26:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.464 23:26:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.464 23:26:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:29.464 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:29.464 23:26:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.464 23:26:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:29.464 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:29.464 23:26:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.464 23:26:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.464 23:26:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.464 23:26:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:29.464 23:26:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.464 23:26:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:29.464 Found net devices under 0000:31:00.0: cvl_0_0 00:23:29.464 23:26:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.464 23:26:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.464 23:26:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.464 23:26:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:29.464 23:26:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.464 23:26:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:29.464 Found net devices under 0000:31:00.1: cvl_0_1 00:23:29.464 23:26:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.464 23:26:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:29.464 23:26:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:29.464 23:26:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:29.464 23:26:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:29.464 23:26:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.464 23:26:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.464 23:26:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.464 23:26:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:29.464 23:26:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.464 23:26:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.464 23:26:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:29.464 23:26:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.464 23:26:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.464 23:26:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:29.464 23:26:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:29.464 23:26:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.464 23:26:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.726 23:26:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.726 23:26:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.726 23:26:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:29.726 23:26:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.726 23:26:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.726 23:26:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.726 23:26:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:29.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:29.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:23:29.726 00:23:29.726 --- 10.0.0.2 ping statistics --- 00:23:29.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.726 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:23:29.727 23:26:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:23:29.727 00:23:29.727 --- 10.0.0.1 ping statistics --- 00:23:29.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.727 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:23:29.727 23:26:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.727 23:26:18 -- nvmf/common.sh@411 -- # return 0 00:23:29.727 23:26:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:29.727 23:26:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.727 23:26:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:29.727 23:26:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:29.727 23:26:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.727 23:26:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:29.727 23:26:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:29.987 23:26:19 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:29.987 23:26:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:29.987 23:26:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:29.987 23:26:19 -- common/autotest_common.sh@10 -- # set +x 00:23:29.987 23:26:19 -- nvmf/common.sh@470 -- # nvmfpid=4015038 00:23:29.987 23:26:19 -- nvmf/common.sh@471 -- # waitforlisten 4015038 00:23:29.987 23:26:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.987 23:26:19 -- common/autotest_common.sh@817 -- # '[' -z 4015038 ']' 00:23:29.987 23:26:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.987 23:26:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:29.987 23:26:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.987 23:26:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:29.987 23:26:19 -- common/autotest_common.sh@10 -- # set +x 00:23:29.987 [2024-04-26 23:26:19.069056] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:29.987 [2024-04-26 23:26:19.069119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.987 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.987 [2024-04-26 23:26:19.142401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.987 [2024-04-26 23:26:19.181835] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.987 [2024-04-26 23:26:19.181891] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:29.987 [2024-04-26 23:26:19.181905] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.987 [2024-04-26 23:26:19.181913] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.987 [2024-04-26 23:26:19.181920] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.987 [2024-04-26 23:26:19.182108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.987 [2024-04-26 23:26:19.182254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.987 [2024-04-26 23:26:19.182416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.987 [2024-04-26 23:26:19.182417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.929 23:26:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:30.929 23:26:19 -- common/autotest_common.sh@850 -- # return 0 00:23:30.929 23:26:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:30.929 23:26:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:30.929 23:26:19 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 23:26:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.929 23:26:19 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.929 23:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.929 23:26:19 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 [2024-04-26 23:26:19.901516] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.929 23:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.929 23:26:19 -- target/multiconnection.sh@21 -- # seq 1 11 00:23:30.929 23:26:19 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.929 23:26:19 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:30.929 23:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.929 23:26:19 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 Malloc1 00:23:30.929 23:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.929 23:26:19 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:30.929 23:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.929 23:26:19 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 23:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.929 23:26:19 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:30.929 23:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.929 23:26:19 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 23:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.929 23:26:19 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.929 23:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.929 23:26:19 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 [2024-04-26 23:26:19.968928] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.929 23:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.929 23:26:19 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.929 23:26:19 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:30.929 23:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.929 23:26:19 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 Malloc2 00:23:30.929 23:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.929 23:26:19 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:30.929 23:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.929 23:26:19 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.929 23:26:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:30.929 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.929 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.929 23:26:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:30.929 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.929 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.929 23:26:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.929 23:26:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:30.929 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.929 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.929 Malloc3 00:23:30.929 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.930 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.930 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.930 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.930 23:26:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.930 Malloc4 00:23:30.930 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.930 23:26:20 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.930 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.930 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.930 23:26:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.930 Malloc5 00:23:30.930 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.930 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.930 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.930 23:26:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:30.930 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.930 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.191 23:26:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 Malloc6 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.191 23:26:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 Malloc7 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.191 23:26:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 Malloc8 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.191 23:26:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 Malloc9 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- 
target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.191 23:26:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 Malloc10 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.191 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.191 23:26:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:31.191 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.191 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.452 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.452 23:26:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.452 23:26:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:31.452 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.452 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.452 Malloc11 00:23:31.452 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.452 23:26:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:31.452 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.452 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.452 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.452 23:26:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:31.452 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.452 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.452 
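This provisioning loop runs eleven times (seq 1 11 in the trace); the final iteration for cnode11 completes just below with its listener. Each pass creates a 64 MiB malloc bdev with 512-byte blocks, a subsystem with serial number SPDKn that allows any host (-a), attaches the bdev as a namespace, and adds the shared TCP listener on 10.0.0.2:4420. Consolidated as plain rpc.py calls, assuming rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py (a sketch of the same RPCs, not the test script itself):

    # Same RPC sequence as the trace, issued directly via scripts/rpc.py.
    rpc=scripts/rpc.py   # the real test runs this against the target inside the netns

    $rpc nvmf_create_transport -t tcp -o -u 8192   # transport flags copied from the trace

    for i in $(seq 1 11); do
      $rpc bdev_malloc_create 64 512 -b "Malloc$i"                             # 64 MiB, 512 B blocks
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"  # -a: allow any host
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

The serial numbers matter: they are what the host side later greps for in lsblk output to confirm each connection actually produced a block device.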
23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.452 23:26:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:31.452 23:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.452 23:26:20 -- common/autotest_common.sh@10 -- # set +x 00:23:31.452 23:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.452 23:26:20 -- target/multiconnection.sh@28 -- # seq 1 11 00:23:31.452 23:26:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.452 23:26:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:32.834 23:26:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:32.834 23:26:22 -- common/autotest_common.sh@1184 -- # local i=0 00:23:32.834 23:26:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:32.834 23:26:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:32.834 23:26:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:34.857 23:26:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:34.857 23:26:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:34.857 23:26:24 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:23:34.857 23:26:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:34.857 23:26:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:34.857 23:26:24 -- common/autotest_common.sh@1194 -- # return 0 00:23:34.857 23:26:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:34.857 23:26:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:36.769 23:26:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:36.769 23:26:25 -- common/autotest_common.sh@1184 -- # local i=0 00:23:36.769 23:26:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:36.769 23:26:25 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:36.769 23:26:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:38.688 23:26:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:38.688 23:26:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:38.688 23:26:27 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:23:38.688 23:26:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:38.688 23:26:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:38.688 23:26:27 -- common/autotest_common.sh@1194 -- # return 0 00:23:38.688 23:26:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:38.688 23:26:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:40.072 23:26:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:40.072 23:26:29 -- common/autotest_common.sh@1184 -- # local i=0 00:23:40.072 23:26:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:40.072 
23:26:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:40.072 23:26:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:42.650 23:26:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:42.650 23:26:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:42.650 23:26:31 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:23:42.650 23:26:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:42.650 23:26:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:42.650 23:26:31 -- common/autotest_common.sh@1194 -- # return 0 00:23:42.650 23:26:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:42.650 23:26:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:44.034 23:26:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:44.034 23:26:32 -- common/autotest_common.sh@1184 -- # local i=0 00:23:44.034 23:26:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:44.034 23:26:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:44.034 23:26:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:45.943 23:26:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:45.943 23:26:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:45.943 23:26:34 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:23:45.943 23:26:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:45.943 23:26:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:45.943 23:26:34 -- common/autotest_common.sh@1194 -- # return 0 00:23:45.943 23:26:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.943 23:26:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:47.326 23:26:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:47.326 23:26:36 -- common/autotest_common.sh@1184 -- # local i=0 00:23:47.326 23:26:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:47.326 23:26:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:47.326 23:26:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:49.870 23:26:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:49.870 23:26:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:49.870 23:26:38 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:23:49.870 23:26:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:49.870 23:26:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:49.870 23:26:38 -- common/autotest_common.sh@1194 -- # return 0 00:23:49.870 23:26:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:49.870 23:26:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:51.254 23:26:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:51.254 23:26:40 -- common/autotest_common.sh@1184 -- # 
local i=0 00:23:51.254 23:26:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:51.254 23:26:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:51.254 23:26:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:53.176 23:26:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:53.176 23:26:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:53.176 23:26:42 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:23:53.176 23:26:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:53.176 23:26:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:53.176 23:26:42 -- common/autotest_common.sh@1194 -- # return 0 00:23:53.176 23:26:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:53.176 23:26:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:55.086 23:26:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:55.086 23:26:44 -- common/autotest_common.sh@1184 -- # local i=0 00:23:55.086 23:26:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.086 23:26:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:55.086 23:26:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:56.998 23:26:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:56.998 23:26:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:56.998 23:26:46 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:23:56.998 23:26:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:56.998 23:26:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:56.998 23:26:46 -- common/autotest_common.sh@1194 -- # return 0 00:23:56.998 23:26:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:56.998 23:26:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:23:58.913 23:26:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:58.913 23:26:47 -- common/autotest_common.sh@1184 -- # local i=0 00:23:58.913 23:26:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:58.913 23:26:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:58.913 23:26:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:00.830 23:26:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:00.830 23:26:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:00.830 23:26:49 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:24:00.830 23:26:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:00.830 23:26:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:00.830 23:26:49 -- common/autotest_common.sh@1194 -- # return 0 00:24:00.830 23:26:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.830 23:26:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:02.748 
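On the host side the same pattern repeats for cnode1 through cnode11: nvme connect with a fixed hostnqn/hostid pair (the hostid is the UUID portion of the hostnqn), then waitforserial polls lsblk until a block device reporting the subsystem's serial (SPDK1, SPDK2, ...) shows up, retrying up to 15 times with a 2-second sleep between attempts. A condensed sketch of one iteration, paraphrased from the traced autotest_common.sh helper (not its verbatim body):

    # Connect one subsystem and wait for its namespace to appear as a block device.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    connect_and_wait() {
      local i=$1 tries=0
      nvme connect --hostnqn="$HOSTNQN" --hostid="${HOSTNQN##*:}" \
           -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      # Poll until the kernel exposes a device whose SERIAL column matches SPDK$i.
      while (( tries++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") >= 1 )) && return 0
      done
      return 1
    }

    for i in $(seq 1 11); do connect_and_wait "$i"; done

The roughly 4-second gap between successive connects in the timestamps above is this poll loop: each device appears on the first or second lsblk check.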
23:26:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:02.748 23:26:51 -- common/autotest_common.sh@1184 -- # local i=0 00:24:02.748 23:26:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:02.748 23:26:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:02.748 23:26:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:04.664 23:26:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:04.664 23:26:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:04.664 23:26:53 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:24:04.664 23:26:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:04.664 23:26:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:04.664 23:26:53 -- common/autotest_common.sh@1194 -- # return 0 00:24:04.664 23:26:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.664 23:26:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:06.579 23:26:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:06.579 23:26:55 -- common/autotest_common.sh@1184 -- # local i=0 00:24:06.579 23:26:55 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:06.579 23:26:55 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:06.579 23:26:55 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:08.558 23:26:57 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:08.558 23:26:57 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:08.558 23:26:57 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:24:08.558 23:26:57 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:08.558 23:26:57 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:08.558 23:26:57 -- common/autotest_common.sh@1194 -- # return 0 00:24:08.558 23:26:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.558 23:26:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:10.469 23:26:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:10.469 23:26:59 -- common/autotest_common.sh@1184 -- # local i=0 00:24:10.469 23:26:59 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:10.470 23:26:59 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:10.470 23:26:59 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:12.384 23:27:01 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:12.384 23:27:01 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:12.384 23:27:01 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:24:12.384 23:27:01 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:12.384 23:27:01 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:12.384 23:27:01 -- common/autotest_common.sh@1194 -- # return 0 00:24:12.384 23:27:01 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:12.384 [global] 00:24:12.384 thread=1 00:24:12.384 
invalidate=1 00:24:12.384 rw=read 00:24:12.384 time_based=1 00:24:12.384 runtime=10 00:24:12.384 ioengine=libaio 00:24:12.384 direct=1 00:24:12.384 bs=262144 00:24:12.384 iodepth=64 00:24:12.384 norandommap=1 00:24:12.384 numjobs=1 00:24:12.384 00:24:12.384 [job0] 00:24:12.384 filename=/dev/nvme0n1 00:24:12.384 [job1] 00:24:12.384 filename=/dev/nvme10n1 00:24:12.384 [job2] 00:24:12.384 filename=/dev/nvme1n1 00:24:12.384 [job3] 00:24:12.384 filename=/dev/nvme2n1 00:24:12.384 [job4] 00:24:12.384 filename=/dev/nvme3n1 00:24:12.384 [job5] 00:24:12.384 filename=/dev/nvme4n1 00:24:12.384 [job6] 00:24:12.384 filename=/dev/nvme5n1 00:24:12.384 [job7] 00:24:12.384 filename=/dev/nvme6n1 00:24:12.384 [job8] 00:24:12.384 filename=/dev/nvme7n1 00:24:12.384 [job9] 00:24:12.384 filename=/dev/nvme8n1 00:24:12.384 [job10] 00:24:12.384 filename=/dev/nvme9n1 00:24:12.683 Could not set queue depth (nvme0n1) 00:24:12.683 Could not set queue depth (nvme10n1) 00:24:12.683 Could not set queue depth (nvme1n1) 00:24:12.683 Could not set queue depth (nvme2n1) 00:24:12.683 Could not set queue depth (nvme3n1) 00:24:12.683 Could not set queue depth (nvme4n1) 00:24:12.683 Could not set queue depth (nvme5n1) 00:24:12.683 Could not set queue depth (nvme6n1) 00:24:12.683 Could not set queue depth (nvme7n1) 00:24:12.683 Could not set queue depth (nvme8n1) 00:24:12.683 Could not set queue depth (nvme9n1) 00:24:12.945 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:12.945 fio-3.35 00:24:12.945 Starting 11 threads 00:24:25.276 00:24:25.276 job0: (groupid=0, jobs=1): err= 0: pid=4023837: Fri Apr 26 23:27:12 2024 00:24:25.276 read: IOPS=854, BW=214MiB/s (224MB/s)(2151MiB/10066msec) 00:24:25.276 slat (usec): min=6, max=54802, avg=1122.87, stdev=2882.91 00:24:25.276 clat (msec): min=12, max=195, avg=73.68, stdev=28.07 00:24:25.276 lat (msec): min=13, max=195, avg=74.80, stdev=28.55 00:24:25.276 clat percentiles (msec): 00:24:25.276 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 51], 00:24:25.276 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 73], 60.00th=[ 79], 00:24:25.276 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 112], 95.00th=[ 127], 00:24:25.276 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 194], 00:24:25.276 | 99.99th=[ 197] 
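The job file dumped above maps fio-wrapper's flags directly onto fio options: -i 262144 becomes bs=262144 (256 KiB requests), -d 64 becomes iodepth=64, -t read selects the sequential-read workload, and -r 10 gives a 10-second time_based run, with one libaio job per connected namespace (/dev/nvme0n1 through /dev/nvme9n1 plus /dev/nvme10n1). The "Could not set queue depth" warnings appear to be fio failing to adjust a block-layer queue attribute for these fabrics devices; they are benign here, since all 11 jobs still start. For a single device, an equivalent standalone invocation would be (a sketch of the generated settings, not the wrapper's actual command line):

    # One-device equivalent of the generated job file; the real run drives
    # all 11 namespaces concurrently from a single multi-job file.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=read --bs=262144 --iodepth=64 --ioengine=libaio \
        --direct=1 --thread=1 --invalidate=1 --norandommap=1 \
        --numjobs=1 --time_based=1 --runtime=10

Each per-job block in the results that follow reports submission latency (slat), completion latency (clat), a clat percentile table, and aggregate bandwidth/IOPS for that namespace over the 10-second window.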
00:24:25.276 bw ( KiB/s): min=108544, max=372224, per=8.93%, avg=218649.60, stdev=72898.16, samples=20 00:24:25.276 iops : min= 424, max= 1454, avg=854.10, stdev=284.76, samples=20 00:24:25.276 lat (msec) : 20=0.06%, 50=19.26%, 100=63.56%, 250=17.12% 00:24:25.276 cpu : usr=0.32%, sys=3.05%, ctx=1945, majf=0, minf=4097 00:24:25.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:25.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.276 issued rwts: total=8604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.276 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.276 job1: (groupid=0, jobs=1): err= 0: pid=4023856: Fri Apr 26 23:27:12 2024 00:24:25.276 read: IOPS=683, BW=171MiB/s (179MB/s)(1719MiB/10065msec) 00:24:25.276 slat (usec): min=8, max=48038, avg=1244.44, stdev=3462.55 00:24:25.276 clat (msec): min=7, max=207, avg=92.34, stdev=23.88 00:24:25.276 lat (msec): min=7, max=209, avg=93.59, stdev=24.38 00:24:25.276 clat percentiles (msec): 00:24:25.276 | 1.00th=[ 30], 5.00th=[ 55], 10.00th=[ 66], 20.00th=[ 77], 00:24:25.276 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 90], 60.00th=[ 97], 00:24:25.276 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 122], 95.00th=[ 131], 00:24:25.276 | 99.00th=[ 159], 99.50th=[ 171], 99.90th=[ 178], 99.95th=[ 190], 00:24:25.276 | 99.99th=[ 207] 00:24:25.276 bw ( KiB/s): min=103936, max=226304, per=7.12%, avg=174454.85, stdev=32501.08, samples=20 00:24:25.276 iops : min= 406, max= 884, avg=681.45, stdev=126.96, samples=20 00:24:25.276 lat (msec) : 10=0.01%, 20=0.41%, 50=3.02%, 100=61.83%, 250=34.72% 00:24:25.276 cpu : usr=0.25%, sys=2.15%, ctx=1702, majf=0, minf=4097 00:24:25.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:25.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.276 issued rwts: total=6877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.276 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.276 job2: (groupid=0, jobs=1): err= 0: pid=4023875: Fri Apr 26 23:27:12 2024 00:24:25.276 read: IOPS=990, BW=248MiB/s (260MB/s)(2486MiB/10037msec) 00:24:25.276 slat (usec): min=5, max=73602, avg=796.76, stdev=2864.31 00:24:25.276 clat (msec): min=2, max=183, avg=63.74, stdev=29.46 00:24:25.276 lat (msec): min=2, max=187, avg=64.54, stdev=29.87 00:24:25.276 clat percentiles (msec): 00:24:25.276 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 31], 20.00th=[ 37], 00:24:25.276 | 30.00th=[ 45], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 68], 00:24:25.276 | 70.00th=[ 77], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 116], 00:24:25.276 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 165], 99.95th=[ 182], 00:24:25.276 | 99.99th=[ 184] 00:24:25.276 bw ( KiB/s): min=141312, max=428544, per=10.33%, avg=252947.30, stdev=72082.32, samples=20 00:24:25.276 iops : min= 552, max= 1674, avg=988.05, stdev=281.59, samples=20 00:24:25.276 lat (msec) : 4=0.12%, 10=1.68%, 20=4.37%, 50=28.20%, 100=52.16% 00:24:25.276 lat (msec) : 250=13.47% 00:24:25.276 cpu : usr=0.42%, sys=2.91%, ctx=2280, majf=0, minf=4097 00:24:25.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:25.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.276 issued 
rwts: total=9943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.276 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.276 job3: (groupid=0, jobs=1): err= 0: pid=4023887: Fri Apr 26 23:27:12 2024 00:24:25.276 read: IOPS=859, BW=215MiB/s (225MB/s)(2158MiB/10041msec) 00:24:25.276 slat (usec): min=8, max=53702, avg=1116.93, stdev=2884.99 00:24:25.276 clat (msec): min=4, max=182, avg=73.24, stdev=22.27 00:24:25.277 lat (msec): min=4, max=211, avg=74.36, stdev=22.60 00:24:25.277 clat percentiles (msec): 00:24:25.277 | 1.00th=[ 33], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 58], 00:24:25.277 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 73], 00:24:25.277 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 118], 00:24:25.277 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 176], 99.95th=[ 182], 00:24:25.277 | 99.99th=[ 184] 00:24:25.277 bw ( KiB/s): min=109056, max=292864, per=8.95%, avg=219338.65, stdev=49100.00, samples=20 00:24:25.277 iops : min= 426, max= 1144, avg=856.75, stdev=191.78, samples=20 00:24:25.277 lat (msec) : 10=0.08%, 20=0.08%, 50=8.04%, 100=82.09%, 250=9.71% 00:24:25.277 cpu : usr=0.39%, sys=3.02%, ctx=1898, majf=0, minf=4097 00:24:25.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.277 issued rwts: total=8630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.277 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.277 job4: (groupid=0, jobs=1): err= 0: pid=4023893: Fri Apr 26 23:27:12 2024 00:24:25.277 read: IOPS=939, BW=235MiB/s (246MB/s)(2368MiB/10085msec) 00:24:25.277 slat (usec): min=6, max=84239, avg=910.34, stdev=2806.77 00:24:25.277 clat (usec): min=1267, max=199417, avg=67160.77, stdev=28918.83 00:24:25.277 lat (usec): min=1316, max=200845, avg=68071.11, stdev=29341.21 00:24:25.277 clat percentiles (msec): 00:24:25.277 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 46], 00:24:25.277 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 70], 00:24:25.277 | 70.00th=[ 79], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 117], 00:24:25.277 | 99.00th=[ 150], 99.50th=[ 167], 99.90th=[ 171], 99.95th=[ 176], 00:24:25.277 | 99.99th=[ 201] 00:24:25.277 bw ( KiB/s): min=147968, max=344064, per=9.83%, avg=240793.60, stdev=65696.67, samples=20 00:24:25.277 iops : min= 578, max= 1344, avg=940.60, stdev=256.63, samples=20 00:24:25.277 lat (msec) : 2=0.20%, 4=0.07%, 10=0.59%, 20=2.67%, 50=24.50% 00:24:25.277 lat (msec) : 100=55.29%, 250=16.67% 00:24:25.277 cpu : usr=0.33%, sys=2.96%, ctx=2274, majf=0, minf=4097 00:24:25.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.277 issued rwts: total=9470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.277 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.277 job5: (groupid=0, jobs=1): err= 0: pid=4023901: Fri Apr 26 23:27:12 2024 00:24:25.277 read: IOPS=753, BW=188MiB/s (198MB/s)(1901MiB/10085msec) 00:24:25.277 slat (usec): min=5, max=60909, avg=1085.19, stdev=3091.18 00:24:25.277 clat (msec): min=6, max=187, avg=83.70, stdev=25.32 00:24:25.277 lat (msec): min=6, max=198, avg=84.78, stdev=25.73 00:24:25.277 clat percentiles (msec): 00:24:25.277 | 1.00th=[ 35], 5.00th=[ 52], 
10.00th=[ 56], 20.00th=[ 63], 00:24:25.277 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 88], 00:24:25.277 | 70.00th=[ 97], 80.00th=[ 105], 90.00th=[ 115], 95.00th=[ 129], 00:24:25.277 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 178], 99.95th=[ 184], 00:24:25.277 | 99.99th=[ 188] 00:24:25.277 bw ( KiB/s): min=114688, max=265216, per=7.88%, avg=193068.70, stdev=43586.28, samples=20 00:24:25.277 iops : min= 448, max= 1036, avg=754.15, stdev=170.26, samples=20 00:24:25.277 lat (msec) : 10=0.14%, 20=0.12%, 50=3.96%, 100=69.29%, 250=26.49% 00:24:25.277 cpu : usr=0.31%, sys=2.35%, ctx=1838, majf=0, minf=4097 00:24:25.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.277 issued rwts: total=7604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.277 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.277 job6: (groupid=0, jobs=1): err= 0: pid=4023902: Fri Apr 26 23:27:12 2024 00:24:25.277 read: IOPS=856, BW=214MiB/s (225MB/s)(2160MiB/10084msec) 00:24:25.277 slat (usec): min=5, max=89439, avg=984.68, stdev=3100.01 00:24:25.277 clat (msec): min=3, max=183, avg=73.63, stdev=25.96 00:24:25.277 lat (msec): min=3, max=217, avg=74.62, stdev=26.40 00:24:25.277 clat percentiles (msec): 00:24:25.277 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 50], 00:24:25.277 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 74], 60.00th=[ 82], 00:24:25.277 | 70.00th=[ 89], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 115], 00:24:25.277 | 99.00th=[ 130], 99.50th=[ 138], 99.90th=[ 178], 99.95th=[ 182], 00:24:25.277 | 99.99th=[ 184] 00:24:25.277 bw ( KiB/s): min=145920, max=368640, per=8.96%, avg=219520.00, stdev=54144.45, samples=20 00:24:25.277 iops : min= 570, max= 1440, avg=857.50, stdev=211.50, samples=20 00:24:25.277 lat (msec) : 4=0.05%, 10=0.38%, 20=1.03%, 50=19.51%, 100=61.73% 00:24:25.277 lat (msec) : 250=17.31% 00:24:25.277 cpu : usr=0.29%, sys=2.56%, ctx=2090, majf=0, minf=3534 00:24:25.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.277 issued rwts: total=8638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.277 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.277 job7: (groupid=0, jobs=1): err= 0: pid=4023904: Fri Apr 26 23:27:12 2024 00:24:25.277 read: IOPS=857, BW=214MiB/s (225MB/s)(2154MiB/10047msec) 00:24:25.277 slat (usec): min=6, max=27255, avg=1157.06, stdev=2856.26 00:24:25.277 clat (msec): min=17, max=123, avg=73.36, stdev=19.20 00:24:25.277 lat (msec): min=17, max=126, avg=74.51, stdev=19.43 00:24:25.277 clat percentiles (msec): 00:24:25.277 | 1.00th=[ 29], 5.00th=[ 32], 10.00th=[ 46], 20.00th=[ 58], 00:24:25.277 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 81], 00:24:25.277 | 70.00th=[ 85], 80.00th=[ 90], 90.00th=[ 96], 95.00th=[ 102], 00:24:25.277 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 122], 00:24:25.277 | 99.99th=[ 124] 00:24:25.277 bw ( KiB/s): min=162304, max=352256, per=8.94%, avg=218982.40, stdev=51555.04, samples=20 00:24:25.277 iops : min= 634, max= 1376, avg=855.40, stdev=201.39, samples=20 00:24:25.277 lat (msec) : 20=0.03%, 50=12.80%, 100=81.27%, 250=5.90% 00:24:25.277 cpu : usr=0.39%, sys=2.94%, ctx=1764, 
majf=0, minf=4097 00:24:25.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.277 issued rwts: total=8617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.277 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.277 job8: (groupid=0, jobs=1): err= 0: pid=4023905: Fri Apr 26 23:27:12 2024 00:24:25.277 read: IOPS=924, BW=231MiB/s (242MB/s)(2328MiB/10066msec) 00:24:25.277 slat (usec): min=5, max=49307, avg=882.97, stdev=2659.09 00:24:25.277 clat (usec): min=1274, max=159546, avg=68239.04, stdev=27126.67 00:24:25.277 lat (usec): min=1323, max=165628, avg=69122.02, stdev=27539.02 00:24:25.277 clat percentiles (msec): 00:24:25.277 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 32], 20.00th=[ 42], 00:24:25.277 | 30.00th=[ 56], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 78], 00:24:25.277 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 103], 95.00th=[ 115], 00:24:25.277 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 150], 00:24:25.277 | 99.99th=[ 161] 00:24:25.277 bw ( KiB/s): min=139264, max=445952, per=9.66%, avg=236723.20, stdev=78345.65, samples=20 00:24:25.277 iops : min= 544, max= 1742, avg=924.70, stdev=306.04, samples=20 00:24:25.277 lat (msec) : 2=0.24%, 4=0.15%, 10=0.98%, 20=2.49%, 50=20.98% 00:24:25.277 lat (msec) : 100=63.68%, 250=11.48% 00:24:25.277 cpu : usr=0.37%, sys=2.71%, ctx=2268, majf=0, minf=4097 00:24:25.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.277 issued rwts: total=9310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.277 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.277 job9: (groupid=0, jobs=1): err= 0: pid=4023906: Fri Apr 26 23:27:12 2024 00:24:25.277 read: IOPS=1081, BW=270MiB/s (284MB/s)(2709MiB/10014msec) 00:24:25.277 slat (usec): min=6, max=33205, avg=898.46, stdev=2415.07 00:24:25.277 clat (msec): min=10, max=134, avg=58.22, stdev=26.74 00:24:25.277 lat (msec): min=16, max=137, avg=59.11, stdev=27.13 00:24:25.277 clat percentiles (msec): 00:24:25.277 | 1.00th=[ 25], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:24:25.277 | 30.00th=[ 31], 40.00th=[ 46], 50.00th=[ 58], 60.00th=[ 70], 00:24:25.277 | 70.00th=[ 79], 80.00th=[ 86], 90.00th=[ 94], 95.00th=[ 100], 00:24:25.277 | 99.00th=[ 116], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 132], 00:24:25.277 | 99.99th=[ 134] 00:24:25.277 bw ( KiB/s): min=165888, max=543744, per=11.26%, avg=275737.60, stdev=132967.09, samples=20 00:24:25.277 iops : min= 648, max= 2124, avg=1077.10, stdev=519.40, samples=20 00:24:25.277 lat (msec) : 20=0.15%, 50=43.41%, 100=51.76%, 250=4.68% 00:24:25.277 cpu : usr=0.33%, sys=3.38%, ctx=2225, majf=0, minf=4097 00:24:25.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:24:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.277 issued rwts: total=10834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.277 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.277 job10: (groupid=0, jobs=1): err= 0: pid=4023907: Fri Apr 26 23:27:12 2024 00:24:25.277 read: IOPS=794, BW=199MiB/s 
(208MB/s)(2003MiB/10088msec) 00:24:25.277 slat (usec): min=5, max=70405, avg=967.30, stdev=3282.14 00:24:25.277 clat (msec): min=2, max=238, avg=79.53, stdev=31.63 00:24:25.277 lat (msec): min=2, max=238, avg=80.50, stdev=32.10 00:24:25.277 clat percentiles (msec): 00:24:25.277 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 35], 20.00th=[ 58], 00:24:25.277 | 30.00th=[ 66], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 88], 00:24:25.277 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 115], 95.00th=[ 127], 00:24:25.277 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 186], 99.95th=[ 192], 00:24:25.277 | 99.99th=[ 239] 00:24:25.277 bw ( KiB/s): min=104448, max=305152, per=8.31%, avg=203468.80, stdev=49645.08, samples=20 00:24:25.277 iops : min= 408, max= 1192, avg=794.80, stdev=193.93, samples=20 00:24:25.277 lat (msec) : 4=0.24%, 10=1.29%, 20=4.17%, 50=8.90%, 100=59.93% 00:24:25.277 lat (msec) : 250=25.48% 00:24:25.277 cpu : usr=0.30%, sys=2.57%, ctx=2010, majf=0, minf=4097 00:24:25.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:25.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:25.277 issued rwts: total=8011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.277 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:25.277 00:24:25.277 Run status group 0 (all jobs): 00:24:25.277 READ: bw=2392MiB/s (2509MB/s), 171MiB/s-270MiB/s (179MB/s-284MB/s), io=23.6GiB (25.3GB), run=10014-10088msec 00:24:25.277 00:24:25.277 Disk stats (read/write): 00:24:25.277 nvme0n1: ios=16846/0, merge=0/0, ticks=1216264/0, in_queue=1216264, util=96.40% 00:24:25.277 nvme10n1: ios=13393/0, merge=0/0, ticks=1220775/0, in_queue=1220775, util=96.66% 00:24:25.277 nvme1n1: ios=19430/0, merge=0/0, ticks=1226103/0, in_queue=1226103, util=97.05% 00:24:25.277 nvme2n1: ios=16873/0, merge=0/0, ticks=1221105/0, in_queue=1221105, util=97.29% 00:24:25.277 nvme3n1: ios=18628/0, merge=0/0, ticks=1218875/0, in_queue=1218875, util=97.44% 00:24:25.277 nvme4n1: ios=14902/0, merge=0/0, ticks=1220535/0, in_queue=1220535, util=97.87% 00:24:25.277 nvme5n1: ios=16999/0, merge=0/0, ticks=1220787/0, in_queue=1220787, util=98.12% 00:24:25.277 nvme6n1: ios=16856/0, merge=0/0, ticks=1219500/0, in_queue=1219500, util=98.28% 00:24:25.277 nvme7n1: ios=18265/0, merge=0/0, ticks=1221883/0, in_queue=1221883, util=98.80% 00:24:25.277 nvme8n1: ios=20870/0, merge=0/0, ticks=1222402/0, in_queue=1222402, util=98.99% 00:24:25.277 nvme9n1: ios=15709/0, merge=0/0, ticks=1222189/0, in_queue=1222189, util=99.22% 00:24:25.277 23:27:12 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:25.277 [global] 00:24:25.277 thread=1 00:24:25.277 invalidate=1 00:24:25.277 rw=randwrite 00:24:25.277 time_based=1 00:24:25.277 runtime=10 00:24:25.277 ioengine=libaio 00:24:25.277 direct=1 00:24:25.277 bs=262144 00:24:25.277 iodepth=64 00:24:25.277 norandommap=1 00:24:25.277 numjobs=1 00:24:25.277 00:24:25.277 [job0] 00:24:25.277 filename=/dev/nvme0n1 00:24:25.277 [job1] 00:24:25.277 filename=/dev/nvme10n1 00:24:25.277 [job2] 00:24:25.277 filename=/dev/nvme1n1 00:24:25.277 [job3] 00:24:25.277 filename=/dev/nvme2n1 00:24:25.277 [job4] 00:24:25.277 filename=/dev/nvme3n1 00:24:25.277 [job5] 00:24:25.277 filename=/dev/nvme4n1 00:24:25.277 [job6] 00:24:25.277 filename=/dev/nvme5n1 00:24:25.277 [job7] 00:24:25.277 filename=/dev/nvme6n1 00:24:25.277 
[job8] 00:24:25.277 filename=/dev/nvme7n1 00:24:25.277 [job9] 00:24:25.277 filename=/dev/nvme8n1 00:24:25.277 [job10] 00:24:25.277 filename=/dev/nvme9n1 00:24:25.277 Could not set queue depth (nvme0n1) 00:24:25.277 Could not set queue depth (nvme10n1) 00:24:25.277 Could not set queue depth (nvme1n1) 00:24:25.277 Could not set queue depth (nvme2n1) 00:24:25.277 Could not set queue depth (nvme3n1) 00:24:25.277 Could not set queue depth (nvme4n1) 00:24:25.277 Could not set queue depth (nvme5n1) 00:24:25.277 Could not set queue depth (nvme6n1) 00:24:25.277 Could not set queue depth (nvme7n1) 00:24:25.277 Could not set queue depth (nvme8n1) 00:24:25.277 Could not set queue depth (nvme9n1) 00:24:25.277 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:25.277 fio-3.35 00:24:25.277 Starting 11 threads 00:24:35.278 00:24:35.278 job0: (groupid=0, jobs=1): err= 0: pid=4026374: Fri Apr 26 23:27:23 2024 00:24:35.278 write: IOPS=712, BW=178MiB/s (187MB/s)(1795MiB/10078msec); 0 zone resets 00:24:35.278 slat (usec): min=24, max=14918, avg=1340.14, stdev=2402.01 00:24:35.278 clat (msec): min=2, max=154, avg=88.45, stdev=15.32 00:24:35.278 lat (msec): min=2, max=154, avg=89.79, stdev=15.47 00:24:35.278 clat percentiles (msec): 00:24:35.278 | 1.00th=[ 35], 5.00th=[ 66], 10.00th=[ 75], 20.00th=[ 80], 00:24:35.278 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 89], 00:24:35.278 | 70.00th=[ 101], 80.00th=[ 104], 90.00th=[ 106], 95.00th=[ 108], 00:24:35.278 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 146], 99.95th=[ 150], 00:24:35.278 | 99.99th=[ 155] 00:24:35.278 bw ( KiB/s): min=154112, max=243712, per=9.66%, avg=182220.80, stdev=23036.65, samples=20 00:24:35.278 iops : min= 602, max= 952, avg=711.80, stdev=89.99, samples=20 00:24:35.278 lat (msec) : 4=0.01%, 10=0.06%, 20=0.11%, 50=2.56%, 100=68.11% 00:24:35.278 lat (msec) : 250=29.15% 00:24:35.278 cpu : usr=1.59%, sys=2.03%, ctx=2089, majf=0, minf=1 00:24:35.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:35.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.278 issued 
rwts: total=0,7181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.278 job1: (groupid=0, jobs=1): err= 0: pid=4026399: Fri Apr 26 23:27:23 2024 00:24:35.278 write: IOPS=761, BW=190MiB/s (200MB/s)(1932MiB/10142msec); 0 zone resets 00:24:35.278 slat (usec): min=26, max=80958, avg=1251.14, stdev=2589.79 00:24:35.278 clat (msec): min=17, max=292, avg=82.70, stdev=32.34 00:24:35.278 lat (msec): min=17, max=292, avg=83.95, stdev=32.72 00:24:35.278 clat percentiles (msec): 00:24:35.278 | 1.00th=[ 43], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 51], 00:24:35.278 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 95], 60.00th=[ 101], 00:24:35.278 | 70.00th=[ 104], 80.00th=[ 106], 90.00th=[ 114], 95.00th=[ 132], 00:24:35.278 | 99.00th=[ 161], 99.50th=[ 197], 99.90th=[ 271], 99.95th=[ 284], 00:24:35.278 | 99.99th=[ 292] 00:24:35.278 bw ( KiB/s): min=102912, max=322560, per=10.40%, avg=196224.00, stdev=73034.00, samples=20 00:24:35.278 iops : min= 402, max= 1260, avg=766.50, stdev=285.29, samples=20 00:24:35.278 lat (msec) : 20=0.06%, 50=19.51%, 100=39.96%, 250=40.23%, 500=0.23% 00:24:35.278 cpu : usr=1.65%, sys=2.16%, ctx=2137, majf=0, minf=1 00:24:35.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:35.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.278 issued rwts: total=0,7728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.278 job2: (groupid=0, jobs=1): err= 0: pid=4026417: Fri Apr 26 23:27:23 2024 00:24:35.278 write: IOPS=1029, BW=257MiB/s (270MB/s)(2592MiB/10073msec); 0 zone resets 00:24:35.278 slat (usec): min=22, max=100408, avg=959.73, stdev=2184.47 00:24:35.278 clat (msec): min=2, max=155, avg=61.19, stdev=15.73 00:24:35.278 lat (msec): min=2, max=155, avg=62.15, stdev=15.87 00:24:35.278 clat percentiles (msec): 00:24:35.278 | 1.00th=[ 26], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:24:35.278 | 30.00th=[ 52], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 59], 00:24:35.278 | 70.00th=[ 64], 80.00th=[ 79], 90.00th=[ 83], 95.00th=[ 85], 00:24:35.278 | 99.00th=[ 113], 99.50th=[ 128], 99.90th=[ 148], 99.95th=[ 153], 00:24:35.278 | 99.99th=[ 157] 00:24:35.278 bw ( KiB/s): min=193536, max=316416, per=13.98%, avg=263782.40, stdev=47626.17, samples=20 00:24:35.278 iops : min= 756, max= 1236, avg=1030.40, stdev=186.04, samples=20 00:24:35.278 lat (msec) : 4=0.02%, 10=0.14%, 20=0.44%, 50=20.73%, 100=77.03% 00:24:35.278 lat (msec) : 250=1.64% 00:24:35.278 cpu : usr=2.46%, sys=3.24%, ctx=2515, majf=0, minf=1 00:24:35.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:35.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.278 issued rwts: total=0,10367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.278 job3: (groupid=0, jobs=1): err= 0: pid=4026428: Fri Apr 26 23:27:23 2024 00:24:35.278 write: IOPS=650, BW=163MiB/s (170MB/s)(1635MiB/10056msec); 0 zone resets 00:24:35.278 slat (usec): min=25, max=148816, avg=1449.46, stdev=3276.11 00:24:35.278 clat (usec): min=1867, max=219197, avg=96916.41, stdev=24849.50 00:24:35.278 lat (msec): min=2, max=224, avg=98.37, stdev=25.15 00:24:35.278 clat percentiles (msec): 00:24:35.278 
| 1.00th=[ 15], 5.00th=[ 43], 10.00th=[ 57], 20.00th=[ 90], 00:24:35.278 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 104], 60.00th=[ 105], 00:24:35.278 | 70.00th=[ 106], 80.00th=[ 107], 90.00th=[ 114], 95.00th=[ 131], 00:24:35.278 | 99.00th=[ 146], 99.50th=[ 180], 99.90th=[ 215], 99.95th=[ 220], 00:24:35.278 | 99.99th=[ 220] 00:24:35.278 bw ( KiB/s): min=124928, max=260096, per=8.79%, avg=165785.60, stdev=31373.09, samples=20 00:24:35.278 iops : min= 488, max= 1016, avg=647.60, stdev=122.55, samples=20 00:24:35.278 lat (msec) : 2=0.02%, 4=0.03%, 10=0.41%, 20=1.19%, 50=5.77% 00:24:35.278 lat (msec) : 100=30.25%, 250=62.33% 00:24:35.278 cpu : usr=1.62%, sys=2.23%, ctx=2107, majf=0, minf=1 00:24:35.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:35.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.278 issued rwts: total=0,6539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.278 job4: (groupid=0, jobs=1): err= 0: pid=4026435: Fri Apr 26 23:27:23 2024 00:24:35.278 write: IOPS=549, BW=137MiB/s (144MB/s)(1393MiB/10138msec); 0 zone resets 00:24:35.278 slat (usec): min=26, max=74688, avg=1791.08, stdev=3488.54 00:24:35.278 clat (msec): min=19, max=292, avg=114.64, stdev=22.92 00:24:35.278 lat (msec): min=19, max=292, avg=116.43, stdev=23.01 00:24:35.278 clat percentiles (msec): 00:24:35.278 | 1.00th=[ 66], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 101], 00:24:35.278 | 30.00th=[ 103], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 122], 00:24:35.278 | 70.00th=[ 130], 80.00th=[ 131], 90.00th=[ 133], 95.00th=[ 148], 00:24:35.278 | 99.00th=[ 207], 99.50th=[ 232], 99.90th=[ 284], 99.95th=[ 284], 00:24:35.278 | 99.99th=[ 292] 00:24:35.279 bw ( KiB/s): min=100864, max=159744, per=7.47%, avg=140979.20, stdev=17575.44, samples=20 00:24:35.279 iops : min= 394, max= 624, avg=550.70, stdev=68.65, samples=20 00:24:35.279 lat (msec) : 20=0.07%, 50=0.54%, 100=18.56%, 250=80.50%, 500=0.32% 00:24:35.279 cpu : usr=1.44%, sys=1.62%, ctx=1414, majf=0, minf=1 00:24:35.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:35.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.279 issued rwts: total=0,5570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.279 job5: (groupid=0, jobs=1): err= 0: pid=4026457: Fri Apr 26 23:27:23 2024 00:24:35.279 write: IOPS=570, BW=143MiB/s (150MB/s)(1446MiB/10140msec); 0 zone resets 00:24:35.279 slat (usec): min=29, max=29734, avg=1702.06, stdev=2989.67 00:24:35.279 clat (msec): min=17, max=291, avg=110.48, stdev=19.96 00:24:35.279 lat (msec): min=17, max=291, avg=112.18, stdev=20.03 00:24:35.279 clat percentiles (msec): 00:24:35.279 | 1.00th=[ 65], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 101], 00:24:35.279 | 30.00th=[ 103], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 105], 00:24:35.279 | 70.00th=[ 115], 80.00th=[ 130], 90.00th=[ 132], 95.00th=[ 133], 00:24:35.279 | 99.00th=[ 159], 99.50th=[ 222], 99.90th=[ 284], 99.95th=[ 284], 00:24:35.279 | 99.99th=[ 292] 00:24:35.279 bw ( KiB/s): min=102912, max=159744, per=7.76%, avg=146416.80, stdev=17639.19, samples=20 00:24:35.279 iops : min= 402, max= 624, avg=571.90, stdev=68.89, samples=20 00:24:35.279 lat (msec) : 20=0.07%, 
50=0.48%, 100=19.38%, 250=79.75%, 500=0.31% 00:24:35.279 cpu : usr=1.18%, sys=1.67%, ctx=1566, majf=0, minf=1 00:24:35.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:35.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.279 issued rwts: total=0,5783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.279 job6: (groupid=0, jobs=1): err= 0: pid=4026468: Fri Apr 26 23:27:23 2024 00:24:35.279 write: IOPS=625, BW=156MiB/s (164MB/s)(1576MiB/10070msec); 0 zone resets 00:24:35.279 slat (usec): min=25, max=41070, avg=1562.41, stdev=2724.70 00:24:35.279 clat (msec): min=27, max=145, avg=100.68, stdev= 8.66 00:24:35.279 lat (msec): min=27, max=145, avg=102.24, stdev= 8.43 00:24:35.279 clat percentiles (msec): 00:24:35.279 | 1.00th=[ 68], 5.00th=[ 88], 10.00th=[ 94], 20.00th=[ 97], 00:24:35.279 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 104], 00:24:35.279 | 70.00th=[ 105], 80.00th=[ 105], 90.00th=[ 106], 95.00th=[ 108], 00:24:35.279 | 99.00th=[ 127], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 142], 00:24:35.279 | 99.99th=[ 146] 00:24:35.279 bw ( KiB/s): min=145408, max=171520, per=8.46%, avg=159718.40, stdev=5941.88, samples=20 00:24:35.279 iops : min= 568, max= 670, avg=623.90, stdev=23.21, samples=20 00:24:35.279 lat (msec) : 50=0.49%, 100=33.43%, 250=66.07% 00:24:35.279 cpu : usr=1.50%, sys=2.12%, ctx=1695, majf=0, minf=1 00:24:35.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:35.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.279 issued rwts: total=0,6302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.279 job7: (groupid=0, jobs=1): err= 0: pid=4026478: Fri Apr 26 23:27:23 2024 00:24:35.279 write: IOPS=620, BW=155MiB/s (163MB/s)(1563MiB/10071msec); 0 zone resets 00:24:35.279 slat (usec): min=24, max=37739, avg=1586.08, stdev=2743.33 00:24:35.279 clat (msec): min=6, max=146, avg=101.50, stdev= 8.34 00:24:35.279 lat (msec): min=6, max=146, avg=103.08, stdev= 8.04 00:24:35.279 clat percentiles (msec): 00:24:35.279 | 1.00th=[ 74], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 99], 00:24:35.279 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 104], 00:24:35.279 | 70.00th=[ 105], 80.00th=[ 105], 90.00th=[ 106], 95.00th=[ 108], 00:24:35.279 | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 138], 99.95th=[ 142], 00:24:35.279 | 99.99th=[ 146] 00:24:35.279 bw ( KiB/s): min=145408, max=175616, per=8.39%, avg=158412.80, stdev=5850.45, samples=20 00:24:35.279 iops : min= 568, max= 686, avg=618.80, stdev=22.85, samples=20 00:24:35.279 lat (msec) : 10=0.05%, 20=0.13%, 50=0.21%, 100=27.96%, 250=71.65% 00:24:35.279 cpu : usr=1.51%, sys=2.01%, ctx=1650, majf=0, minf=1 00:24:35.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:35.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.279 issued rwts: total=0,6251,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.279 job8: (groupid=0, jobs=1): err= 0: pid=4026484: Fri Apr 26 23:27:23 2024 00:24:35.279 
write: IOPS=590, BW=148MiB/s (155MB/s)(1497MiB/10141msec); 0 zone resets 00:24:35.279 slat (usec): min=18, max=125334, avg=1649.94, stdev=3470.55 00:24:35.279 clat (msec): min=11, max=292, avg=106.70, stdev=25.05 00:24:35.279 lat (msec): min=11, max=292, avg=108.35, stdev=25.21 00:24:35.279 clat percentiles (msec): 00:24:35.279 | 1.00th=[ 45], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 86], 00:24:35.279 | 30.00th=[ 99], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 106], 00:24:35.279 | 70.00th=[ 113], 80.00th=[ 130], 90.00th=[ 132], 95.00th=[ 142], 00:24:35.279 | 99.00th=[ 184], 99.50th=[ 224], 99.90th=[ 284], 99.95th=[ 284], 00:24:35.279 | 99.99th=[ 292] 00:24:35.279 bw ( KiB/s): min=102912, max=197632, per=8.04%, avg=151628.80, stdev=25609.85, samples=20 00:24:35.279 iops : min= 402, max= 772, avg=592.30, stdev=100.04, samples=20 00:24:35.279 lat (msec) : 20=0.13%, 50=1.14%, 100=33.81%, 250=64.62%, 500=0.30% 00:24:35.279 cpu : usr=1.44%, sys=1.62%, ctx=1576, majf=0, minf=1 00:24:35.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:35.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.279 issued rwts: total=0,5986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.279 job9: (groupid=0, jobs=1): err= 0: pid=4026485: Fri Apr 26 23:27:23 2024 00:24:35.279 write: IOPS=734, BW=184MiB/s (192MB/s)(1850MiB/10076msec); 0 zone resets 00:24:35.279 slat (usec): min=27, max=17802, avg=1315.82, stdev=2359.56 00:24:35.279 clat (msec): min=9, max=150, avg=85.81, stdev=17.86 00:24:35.279 lat (msec): min=9, max=150, avg=87.13, stdev=18.07 00:24:35.279 clat percentiles (msec): 00:24:35.279 | 1.00th=[ 24], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 78], 00:24:35.279 | 30.00th=[ 80], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 89], 00:24:35.279 | 70.00th=[ 101], 80.00th=[ 104], 90.00th=[ 106], 95.00th=[ 107], 00:24:35.279 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 142], 99.95th=[ 146], 00:24:35.279 | 99.99th=[ 150] 00:24:35.279 bw ( KiB/s): min=153600, max=281088, per=9.95%, avg=187801.60, stdev=34337.81, samples=20 00:24:35.279 iops : min= 600, max= 1098, avg=733.60, stdev=134.13, samples=20 00:24:35.279 lat (msec) : 10=0.07%, 20=0.64%, 50=2.76%, 100=67.32%, 250=29.22% 00:24:35.279 cpu : usr=1.68%, sys=1.99%, ctx=2119, majf=0, minf=1 00:24:35.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:35.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.279 issued rwts: total=0,7399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.279 job10: (groupid=0, jobs=1): err= 0: pid=4026486: Fri Apr 26 23:27:23 2024 00:24:35.279 write: IOPS=557, BW=139MiB/s (146MB/s)(1414MiB/10141msec); 0 zone resets 00:24:35.279 slat (usec): min=22, max=31260, avg=1745.64, stdev=3077.93 00:24:35.279 clat (msec): min=18, max=290, avg=113.01, stdev=21.06 00:24:35.279 lat (msec): min=18, max=290, avg=114.75, stdev=21.15 00:24:35.279 clat percentiles (msec): 00:24:35.279 | 1.00th=[ 61], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 100], 00:24:35.279 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 105], 60.00th=[ 115], 00:24:35.279 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 132], 95.00th=[ 140], 00:24:35.279 | 99.00th=[ 159], 99.50th=[ 
222], 99.90th=[ 279], 99.95th=[ 279], 00:24:35.279 | 99.99th=[ 292] 00:24:35.279 bw ( KiB/s): min=102912, max=171520, per=7.58%, avg=143129.60, stdev=18620.03, samples=20 00:24:35.279 iops : min= 402, max= 670, avg=559.10, stdev=72.73, samples=20 00:24:35.279 lat (msec) : 20=0.07%, 50=0.78%, 100=21.93%, 250=76.90%, 500=0.32% 00:24:35.279 cpu : usr=1.21%, sys=1.58%, ctx=1521, majf=0, minf=1 00:24:35.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:35.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:35.279 issued rwts: total=0,5654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:35.279 00:24:35.279 Run status group 0 (all jobs): 00:24:35.279 WRITE: bw=1843MiB/s (1932MB/s), 137MiB/s-257MiB/s (144MB/s-270MB/s), io=18.3GiB (19.6GB), run=10056-10142msec 00:24:35.279 00:24:35.279 Disk stats (read/write): 00:24:35.279 nvme0n1: ios=49/13995, merge=0/0, ticks=250/1200959, in_queue=1201209, util=98.45% 00:24:35.279 nvme10n1: ios=47/15406, merge=0/0, ticks=1866/1219961, in_queue=1221827, util=99.96% 00:24:35.279 nvme1n1: ios=48/20359, merge=0/0, ticks=2164/1174576, in_queue=1176740, util=100.00% 00:24:35.279 nvme2n1: ios=45/12568, merge=0/0, ticks=1870/1188846, in_queue=1190716, util=99.86% 00:24:35.279 nvme3n1: ios=53/11095, merge=0/0, ticks=2702/1208333, in_queue=1211035, util=100.00% 00:24:35.279 nvme4n1: ios=0/11518, merge=0/0, ticks=0/1226184, in_queue=1226184, util=97.79% 00:24:35.279 nvme5n1: ios=0/12227, merge=0/0, ticks=0/1197510, in_queue=1197510, util=97.94% 00:24:35.279 nvme6n1: ios=0/12102, merge=0/0, ticks=0/1197496, in_queue=1197496, util=98.12% 00:24:35.279 nvme7n1: ios=40/11924, merge=0/0, ticks=1288/1208586, in_queue=1209874, util=100.00% 00:24:35.279 nvme8n1: ios=0/14420, merge=0/0, ticks=0/1200111, in_queue=1200111, util=98.90% 00:24:35.279 nvme9n1: ios=0/11259, merge=0/0, ticks=0/1226381, in_queue=1226381, util=99.11% 00:24:35.279 23:27:23 -- target/multiconnection.sh@36 -- # sync 00:24:35.279 23:27:23 -- target/multiconnection.sh@37 -- # seq 1 11 00:24:35.280 23:27:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.280 23:27:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:35.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:35.280 23:27:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:35.280 23:27:24 -- common/autotest_common.sh@1205 -- # local i=0 00:24:35.280 23:27:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:35.280 23:27:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:24:35.280 23:27:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:35.280 23:27:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:24:35.280 23:27:24 -- common/autotest_common.sh@1217 -- # return 0 00:24:35.280 23:27:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.280 23:27:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.280 23:27:24 -- common/autotest_common.sh@10 -- # set +x 00:24:35.280 23:27:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.280 23:27:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.280 23:27:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode2 00:24:35.280 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:35.280 23:27:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:35.280 23:27:24 -- common/autotest_common.sh@1205 -- # local i=0 00:24:35.280 23:27:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:35.280 23:27:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:24:35.280 23:27:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:35.280 23:27:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:24:35.541 23:27:24 -- common/autotest_common.sh@1217 -- # return 0 00:24:35.541 23:27:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:35.541 23:27:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.541 23:27:24 -- common/autotest_common.sh@10 -- # set +x 00:24:35.541 23:27:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.541 23:27:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.541 23:27:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:35.802 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:35.802 23:27:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:35.802 23:27:24 -- common/autotest_common.sh@1205 -- # local i=0 00:24:35.802 23:27:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:35.802 23:27:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:24:35.802 23:27:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:35.802 23:27:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:24:35.802 23:27:24 -- common/autotest_common.sh@1217 -- # return 0 00:24:35.802 23:27:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:35.802 23:27:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.802 23:27:24 -- common/autotest_common.sh@10 -- # set +x 00:24:35.802 23:27:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.802 23:27:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.802 23:27:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:36.064 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:36.064 23:27:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:36.064 23:27:25 -- common/autotest_common.sh@1205 -- # local i=0 00:24:36.064 23:27:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:36.064 23:27:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:24:36.064 23:27:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:24:36.064 23:27:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:36.064 23:27:25 -- common/autotest_common.sh@1217 -- # return 0 00:24:36.064 23:27:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:36.064 23:27:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.064 23:27:25 -- common/autotest_common.sh@10 -- # set +x 00:24:36.064 23:27:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.064 23:27:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.064 23:27:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:36.325 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 
controller(s) 00:24:36.325 23:27:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:36.325 23:27:25 -- common/autotest_common.sh@1205 -- # local i=0 00:24:36.325 23:27:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:36.325 23:27:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:24:36.325 23:27:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:24:36.325 23:27:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:36.325 23:27:25 -- common/autotest_common.sh@1217 -- # return 0 00:24:36.325 23:27:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:36.325 23:27:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.325 23:27:25 -- common/autotest_common.sh@10 -- # set +x 00:24:36.325 23:27:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.325 23:27:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.325 23:27:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:36.586 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:36.586 23:27:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:36.586 23:27:25 -- common/autotest_common.sh@1205 -- # local i=0 00:24:36.586 23:27:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:36.586 23:27:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:24:36.586 23:27:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:36.586 23:27:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:24:36.586 23:27:25 -- common/autotest_common.sh@1217 -- # return 0 00:24:36.586 23:27:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:36.586 23:27:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.586 23:27:25 -- common/autotest_common.sh@10 -- # set +x 00:24:36.586 23:27:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.586 23:27:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.586 23:27:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:36.847 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:36.847 23:27:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:36.847 23:27:25 -- common/autotest_common.sh@1205 -- # local i=0 00:24:36.847 23:27:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:36.847 23:27:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:24:36.847 23:27:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:36.847 23:27:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:24:36.847 23:27:25 -- common/autotest_common.sh@1217 -- # return 0 00:24:36.847 23:27:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:36.847 23:27:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.847 23:27:25 -- common/autotest_common.sh@10 -- # set +x 00:24:36.847 23:27:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.847 23:27:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.847 23:27:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:36.847 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:36.847 23:27:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect 
SPDK8 00:24:36.847 23:27:26 -- common/autotest_common.sh@1205 -- # local i=0 00:24:36.847 23:27:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:36.847 23:27:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:24:36.847 23:27:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:36.847 23:27:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:24:36.847 23:27:26 -- common/autotest_common.sh@1217 -- # return 0 00:24:36.847 23:27:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:36.847 23:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.847 23:27:26 -- common/autotest_common.sh@10 -- # set +x 00:24:36.847 23:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.847 23:27:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.847 23:27:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:37.108 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:37.108 23:27:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:37.108 23:27:26 -- common/autotest_common.sh@1205 -- # local i=0 00:24:37.108 23:27:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:37.108 23:27:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:24:37.108 23:27:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:37.108 23:27:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:24:37.108 23:27:26 -- common/autotest_common.sh@1217 -- # return 0 00:24:37.108 23:27:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:37.108 23:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.108 23:27:26 -- common/autotest_common.sh@10 -- # set +x 00:24:37.108 23:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.108 23:27:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.108 23:27:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:37.369 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:37.369 23:27:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:37.369 23:27:26 -- common/autotest_common.sh@1205 -- # local i=0 00:24:37.369 23:27:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:37.369 23:27:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:24:37.369 23:27:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:37.369 23:27:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:24:37.369 23:27:26 -- common/autotest_common.sh@1217 -- # return 0 00:24:37.369 23:27:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:37.369 23:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.369 23:27:26 -- common/autotest_common.sh@10 -- # set +x 00:24:37.369 23:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.369 23:27:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.369 23:27:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:37.369 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:37.369 23:27:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:37.369 23:27:26 -- common/autotest_common.sh@1205 -- # local i=0 00:24:37.369 
23:27:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:37.369 23:27:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:24:37.369 23:27:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:37.369 23:27:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:24:37.369 23:27:26 -- common/autotest_common.sh@1217 -- # return 0 00:24:37.369 23:27:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:37.369 23:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.369 23:27:26 -- common/autotest_common.sh@10 -- # set +x 00:24:37.630 23:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.630 23:27:26 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:37.630 23:27:26 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:37.630 23:27:26 -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:37.630 23:27:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:37.630 23:27:26 -- nvmf/common.sh@117 -- # sync 00:24:37.630 23:27:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:37.630 23:27:26 -- nvmf/common.sh@120 -- # set +e 00:24:37.630 23:27:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:37.630 23:27:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:37.630 rmmod nvme_tcp 00:24:37.630 rmmod nvme_fabrics 00:24:37.630 rmmod nvme_keyring 00:24:37.630 23:27:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:37.630 23:27:26 -- nvmf/common.sh@124 -- # set -e 00:24:37.630 23:27:26 -- nvmf/common.sh@125 -- # return 0 00:24:37.630 23:27:26 -- nvmf/common.sh@478 -- # '[' -n 4015038 ']' 00:24:37.630 23:27:26 -- nvmf/common.sh@479 -- # killprocess 4015038 00:24:37.630 23:27:26 -- common/autotest_common.sh@936 -- # '[' -z 4015038 ']' 00:24:37.630 23:27:26 -- common/autotest_common.sh@940 -- # kill -0 4015038 00:24:37.630 23:27:26 -- common/autotest_common.sh@941 -- # uname 00:24:37.630 23:27:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:37.630 23:27:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4015038 00:24:37.630 23:27:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:37.630 23:27:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:37.630 23:27:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4015038' 00:24:37.630 killing process with pid 4015038 00:24:37.630 23:27:26 -- common/autotest_common.sh@955 -- # kill 4015038 00:24:37.630 23:27:26 -- common/autotest_common.sh@960 -- # wait 4015038 00:24:37.890 23:27:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:37.890 23:27:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:37.890 23:27:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:37.890 23:27:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:37.890 23:27:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:37.890 23:27:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.890 23:27:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.890 23:27:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.436 23:27:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:40.436 00:24:40.436 real 1m17.348s 00:24:40.436 user 4m51.620s 00:24:40.436 sys 0m23.329s 00:24:40.436 23:27:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:40.436 23:27:29 -- common/autotest_common.sh@10 -- # set +x 
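The teardown just traced repeats one pattern eleven times: disconnect the kernel initiator from a subsystem, wait until no block device advertises that subsystem's serial number, then delete the subsystem over RPC. A minimal sketch of the loop, assuming nvme-cli and SPDK's scripts/rpc.py; wait_serial_gone is an illustrative stand-in for the suite's waitforserial_disconnect helper, not the helper itself:

    # Sketch of the per-subsystem teardown loop traced above. Assumes nvme-cli
    # and SPDK's scripts/rpc.py; wait_serial_gone is illustrative, not the
    # test suite's own helper.
    rpc_py=scripts/rpc.py

    wait_serial_gone() {
        # The serial (SPDK1..SPDK11) was set via nvmf_create_subsystem -s, so
        # lsblk's SERIAL column shows which namespaces the kernel still holds.
        local serial=$1
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            sleep 0.1
        done
    }

    for i in $(seq 1 11); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        wait_serial_gone "SPDK${i}"
        $rpc_py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done

    # nvmftestfini then kills the target process and unloads the kernel
    # initiator modules (the rmmod lines in the trace above).
    modprobe -v -r nvme-tcp nvme-fabrics

The grep -w match matters here: SPDK1 must not match SPDK11, which is why the wait is keyed on whole-word serial numbers rather than prefixes.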
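Backing up to the fio summary further above: the eleven write jobs line up with a straightforward libaio workload at queue depth 64, one kernel NVMe namespace per subsystem. A cut-down jobfile of that shape follows; the block size is inferred from the BW/IOPS ratio in the stats (e.g. ~190 MiB/s over ~766 IOPS, i.e. about 256 KiB), and the runtime and device names are assumptions, since the harness's own jobfile is not echoed in this part of the log:

    ; Sketch only -- parameters inferred from the stats above, not the
    ; harness's actual jobfile. bs is derived from BW/IOPS (~256 KiB).
    [global]
    ioengine=libaio
    direct=1
    rw=write
    bs=256k
    iodepth=64
    runtime=10
    time_based=1

    [job0]
    filename=/dev/nvme0n1

    [job1]
    filename=/dev/nvme1n1

    ; ...one [jobN] stanza per namespace, through /dev/nvme10n1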
00:24:40.436 ************************************ 00:24:40.436 END TEST nvmf_multiconnection 00:24:40.436 ************************************ 00:24:40.436 23:27:29 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:40.436 23:27:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:40.436 23:27:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:40.436 23:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:40.436 ************************************ 00:24:40.436 START TEST nvmf_initiator_timeout 00:24:40.436 ************************************ 00:24:40.436 23:27:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:40.436 * Looking for test storage... 00:24:40.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:40.436 23:27:29 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.436 23:27:29 -- nvmf/common.sh@7 -- # uname -s 00:24:40.436 23:27:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.436 23:27:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.436 23:27:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.436 23:27:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.436 23:27:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.436 23:27:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.437 23:27:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.437 23:27:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.437 23:27:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.437 23:27:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.437 23:27:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:40.437 23:27:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:40.437 23:27:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.437 23:27:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.437 23:27:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.437 23:27:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.437 23:27:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.437 23:27:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.437 23:27:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.437 23:27:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.437 23:27:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.437 23:27:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.437 23:27:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.437 23:27:29 -- paths/export.sh@5 -- # export PATH 00:24:40.437 23:27:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.437 23:27:29 -- nvmf/common.sh@47 -- # : 0 00:24:40.437 23:27:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.437 23:27:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.437 23:27:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.437 23:27:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.437 23:27:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.437 23:27:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:40.437 23:27:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.437 23:27:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.437 23:27:29 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.437 23:27:29 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.437 23:27:29 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:40.437 23:27:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:40.437 23:27:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.437 23:27:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:40.437 23:27:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:40.437 23:27:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:40.437 23:27:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.437 23:27:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.437 23:27:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.437 23:27:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:40.437 23:27:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:40.437 23:27:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:40.437 23:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:47.027 23:27:35 -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:24:47.027 23:27:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:47.027 23:27:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:47.027 23:27:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:47.027 23:27:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:47.027 23:27:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:47.027 23:27:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:47.027 23:27:35 -- nvmf/common.sh@295 -- # net_devs=() 00:24:47.027 23:27:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:47.027 23:27:35 -- nvmf/common.sh@296 -- # e810=() 00:24:47.027 23:27:35 -- nvmf/common.sh@296 -- # local -ga e810 00:24:47.027 23:27:35 -- nvmf/common.sh@297 -- # x722=() 00:24:47.027 23:27:35 -- nvmf/common.sh@297 -- # local -ga x722 00:24:47.027 23:27:35 -- nvmf/common.sh@298 -- # mlx=() 00:24:47.027 23:27:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:47.027 23:27:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.027 23:27:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:47.027 23:27:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:47.027 23:27:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:47.027 23:27:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.027 23:27:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:47.027 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:47.027 23:27:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.027 23:27:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:47.027 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:47.027 23:27:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.027 23:27:35 -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:47.027 23:27:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.027 23:27:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.027 23:27:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:47.027 23:27:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.027 23:27:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:47.027 Found net devices under 0000:31:00.0: cvl_0_0 00:24:47.027 23:27:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.027 23:27:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.027 23:27:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.027 23:27:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:47.027 23:27:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.027 23:27:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:47.027 Found net devices under 0000:31:00.1: cvl_0_1 00:24:47.027 23:27:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.027 23:27:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:47.027 23:27:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:47.027 23:27:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:47.027 23:27:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:47.027 23:27:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.027 23:27:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.027 23:27:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.027 23:27:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:47.027 23:27:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.027 23:27:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.027 23:27:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:47.027 23:27:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.027 23:27:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.027 23:27:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:47.027 23:27:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:47.027 23:27:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.027 23:27:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.027 23:27:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.027 23:27:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.027 23:27:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:47.027 23:27:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.027 23:27:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.027 23:27:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.027 23:27:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:47.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:47.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:24:47.027 00:24:47.027 --- 10.0.0.2 ping statistics --- 00:24:47.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.027 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:24:47.027 23:27:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:24:47.027 00:24:47.027 --- 10.0.0.1 ping statistics --- 00:24:47.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.028 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:24:47.028 23:27:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.028 23:27:36 -- nvmf/common.sh@411 -- # return 0 00:24:47.028 23:27:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:47.028 23:27:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.028 23:27:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:47.028 23:27:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:47.028 23:27:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.028 23:27:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:47.028 23:27:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:47.288 23:27:36 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:47.288 23:27:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:47.288 23:27:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:47.288 23:27:36 -- common/autotest_common.sh@10 -- # set +x 00:24:47.288 23:27:36 -- nvmf/common.sh@470 -- # nvmfpid=4032823 00:24:47.288 23:27:36 -- nvmf/common.sh@471 -- # waitforlisten 4032823 00:24:47.288 23:27:36 -- common/autotest_common.sh@817 -- # '[' -z 4032823 ']' 00:24:47.288 23:27:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.288 23:27:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:47.289 23:27:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.289 23:27:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:47.289 23:27:36 -- common/autotest_common.sh@10 -- # set +x 00:24:47.289 23:27:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:47.289 [2024-04-26 23:27:36.362363] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:47.289 [2024-04-26 23:27:36.362427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.289 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.289 [2024-04-26 23:27:36.433921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.289 [2024-04-26 23:27:36.471855] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.289 [2024-04-26 23:27:36.471903] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
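One detail worth pulling out of the nvmf_tcp_init trace above: with two physical ports on the same NIC, the harness gets a real TCP path between target and initiator on a single host by moving the target-side port into its own network namespace. Condensed from the commands actually traced (interface names and addresses are the ones the harness itself uses):

    # Condensed from the nvmf_tcp_init trace above; names and addresses
    # are taken from the log, not invented.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

Every target-side command from here on is wrapped in ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix visible on the nvmf_tgt launch line), so the target listens on 10.0.0.2 inside the namespace while nvme connect runs against it from the root namespace.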
00:24:47.289 [2024-04-26 23:27:36.471910] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.289 [2024-04-26 23:27:36.471917] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.289 [2024-04-26 23:27:36.471923] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.289 [2024-04-26 23:27:36.472045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.289 [2024-04-26 23:27:36.472160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.289 [2024-04-26 23:27:36.472318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.289 [2024-04-26 23:27:36.472318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:48.231 23:27:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:48.231 23:27:37 -- common/autotest_common.sh@850 -- # return 0 00:24:48.231 23:27:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:48.231 23:27:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:48.231 23:27:37 -- common/autotest_common.sh@10 -- # set +x 00:24:48.231 23:27:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.231 23:27:37 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:48.231 23:27:37 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:48.231 23:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.231 23:27:37 -- common/autotest_common.sh@10 -- # set +x 00:24:48.231 Malloc0 00:24:48.231 23:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.231 23:27:37 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:48.231 23:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.231 23:27:37 -- common/autotest_common.sh@10 -- # set +x 00:24:48.231 Delay0 00:24:48.231 23:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.231 23:27:37 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:48.231 23:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.231 23:27:37 -- common/autotest_common.sh@10 -- # set +x 00:24:48.231 [2024-04-26 23:27:37.222611] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.231 23:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.231 23:27:37 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:48.231 23:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.231 23:27:37 -- common/autotest_common.sh@10 -- # set +x 00:24:48.231 23:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.231 23:27:37 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:48.231 23:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.231 23:27:37 -- common/autotest_common.sh@10 -- # set +x 00:24:48.231 23:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.231 23:27:37 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.231 23:27:37 -- common/autotest_common.sh@549 -- # xtrace_disable 
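The namespace this test exercises is a delay bdev stacked on a malloc bdev, which is what lets the script push I/O latency past the initiator's timeout later in the run. Collected from the rpc_cmd traces above, the target-side setup amounts to the following rpc.py session (the script path and the default /var/tmp/spdk.sock RPC socket are assumptions; the delay bdev's latency arguments are microseconds, so 30 us each here):

    # Target-side setup as traced above, assuming SPDK's scripts/rpc.py
    # and its default RPC socket.
    rpc_py=scripts/rpc.py

    $rpc_py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512 B blocks
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Further down the trace, the same Delay0 handle is used to flip the fault on and off: bdev_delay_update_latency raises each latency field to tens of seconds (31000000 us and up) so outstanding I/O stalls past the initiator timeout, then drops them back to 30 us so the queued writes can drain; the 60-second fio job finishing with err=0 after that round trip is the pass criterion.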
00:24:48.231 23:27:37 -- common/autotest_common.sh@10 -- # set +x 00:24:48.231 [2024-04-26 23:27:37.262900] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.231 23:27:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.231 23:27:37 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:49.626 23:27:38 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:49.626 23:27:38 -- common/autotest_common.sh@1184 -- # local i=0 00:24:49.626 23:27:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:49.626 23:27:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:49.626 23:27:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:51.535 23:27:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:51.535 23:27:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:51.535 23:27:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:24:51.535 23:27:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:51.535 23:27:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:51.535 23:27:40 -- common/autotest_common.sh@1194 -- # return 0 00:24:51.535 23:27:40 -- target/initiator_timeout.sh@35 -- # fio_pid=4033834 00:24:51.535 23:27:40 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:51.535 23:27:40 -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:51.535 [global] 00:24:51.535 thread=1 00:24:51.535 invalidate=1 00:24:51.535 rw=write 00:24:51.535 time_based=1 00:24:51.535 runtime=60 00:24:51.535 ioengine=libaio 00:24:51.535 direct=1 00:24:51.535 bs=4096 00:24:51.535 iodepth=1 00:24:51.535 norandommap=0 00:24:51.535 numjobs=1 00:24:51.535 00:24:51.535 verify_dump=1 00:24:51.535 verify_backlog=512 00:24:51.535 verify_state_save=0 00:24:51.535 do_verify=1 00:24:51.535 verify=crc32c-intel 00:24:51.817 [job0] 00:24:51.817 filename=/dev/nvme0n1 00:24:51.817 Could not set queue depth (nvme0n1) 00:24:52.076 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:52.076 fio-3.35 00:24:52.076 Starting 1 thread 00:24:54.606 23:27:43 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:54.606 23:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.606 23:27:43 -- common/autotest_common.sh@10 -- # set +x 00:24:54.606 true 00:24:54.606 23:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.606 23:27:43 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:54.606 23:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.606 23:27:43 -- common/autotest_common.sh@10 -- # set +x 00:24:54.606 true 00:24:54.606 23:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.606 23:27:43 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:54.606 23:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.606 23:27:43 -- common/autotest_common.sh@10 -- # set +x 00:24:54.606 true 00:24:54.606 23:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.606 
23:27:43 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:54.606 23:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.606 23:27:43 -- common/autotest_common.sh@10 -- # set +x 00:24:54.606 true 00:24:54.606 23:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.606 23:27:43 -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:57.881 23:27:46 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:57.881 23:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.881 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.881 true 00:24:57.881 23:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.881 23:27:46 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:57.881 23:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.881 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.881 true 00:24:57.881 23:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.881 23:27:46 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:57.881 23:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.881 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.881 true 00:24:57.881 23:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.881 23:27:46 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:57.881 23:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.881 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:24:57.881 true 00:24:57.881 23:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.881 23:27:46 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:57.881 23:27:46 -- target/initiator_timeout.sh@54 -- # wait 4033834 00:25:54.124 00:25:54.124 job0: (groupid=0, jobs=1): err= 0: pid=4034021: Fri Apr 26 23:28:41 2024 00:25:54.124 read: IOPS=213, BW=853KiB/s (874kB/s)(50.0MiB/60000msec) 00:25:54.124 slat (usec): min=6, max=7307, avg=24.53, stdev=83.04 00:25:54.124 clat (usec): min=214, max=41877k, avg=4186.16, stdev=370144.33 00:25:54.124 lat (usec): min=223, max=41877k, avg=4210.69, stdev=370144.35 00:25:54.124 clat percentiles (usec): 00:25:54.124 | 1.00th=[ 396], 5.00th=[ 469], 10.00th=[ 498], 20.00th=[ 553], 00:25:54.124 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 660], 60.00th=[ 865], 00:25:54.124 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 947], 95.00th=[ 963], 00:25:54.124 | 99.00th=[ 996], 99.50th=[ 1057], 99.90th=[42206], 99.95th=[42206], 00:25:54.124 | 99.99th=[42730] 00:25:54.124 write: IOPS=216, BW=866KiB/s (887kB/s)(50.7MiB/60000msec); 0 zone resets 00:25:54.124 slat (nsec): min=9443, max=70530, avg=27597.12, stdev=10327.36 00:25:54.124 clat (usec): min=151, max=4027, avg=427.51, stdev=82.96 00:25:54.124 lat (usec): min=162, max=4061, avg=455.11, stdev=86.58 00:25:54.124 clat percentiles (usec): 00:25:54.124 | 1.00th=[ 200], 5.00th=[ 297], 10.00th=[ 318], 20.00th=[ 375], 00:25:54.124 | 30.00th=[ 400], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 453], 00:25:54.124 | 70.00th=[ 474], 80.00th=[ 498], 90.00th=[ 519], 95.00th=[ 529], 00:25:54.124 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 635], 99.95th=[ 693], 00:25:54.124 | 99.99th=[ 930] 00:25:54.124 bw ( KiB/s): min= 272, max= 4096, per=100.00%, avg=3413.33, stdev=1393.94, samples=30 00:25:54.124 iops : min= 68, max= 1024, avg=853.33, 
stdev=348.48, samples=30 00:25:54.124 lat (usec) : 250=1.19%, 500=44.29%, 750=31.67%, 1000=22.38% 00:25:54.124 lat (msec) : 2=0.23%, 4=0.01%, 10=0.01%, 50=0.23%, >=2000=0.01% 00:25:54.124 cpu : usr=0.65%, sys=1.11%, ctx=25793, majf=0, minf=35 00:25:54.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.124 issued rwts: total=12800,12991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:54.124 00:25:54.124 Run status group 0 (all jobs): 00:25:54.124 READ: bw=853KiB/s (874kB/s), 853KiB/s-853KiB/s (874kB/s-874kB/s), io=50.0MiB (52.4MB), run=60000-60000msec 00:25:54.124 WRITE: bw=866KiB/s (887kB/s), 866KiB/s-866KiB/s (887kB/s-887kB/s), io=50.7MiB (53.2MB), run=60000-60000msec 00:25:54.124 00:25:54.124 Disk stats (read/write): 00:25:54.124 nvme0n1: ios=12863/12800, merge=0/0, ticks=12767/5351, in_queue=18118, util=100.00% 00:25:54.124 23:28:41 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:54.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:54.124 23:28:41 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:54.124 23:28:41 -- common/autotest_common.sh@1205 -- # local i=0 00:25:54.124 23:28:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:54.124 23:28:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:54.124 23:28:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:54.124 23:28:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:54.124 23:28:41 -- common/autotest_common.sh@1217 -- # return 0 00:25:54.124 23:28:41 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:54.124 23:28:41 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:54.124 nvmf hotplug test: fio successful as expected 00:25:54.124 23:28:41 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.124 23:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:54.124 23:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:54.124 23:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:54.124 23:28:41 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:54.124 23:28:41 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:54.124 23:28:41 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:54.124 23:28:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:54.124 23:28:41 -- nvmf/common.sh@117 -- # sync 00:25:54.124 23:28:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:54.124 23:28:41 -- nvmf/common.sh@120 -- # set +e 00:25:54.124 23:28:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:54.124 23:28:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:54.124 rmmod nvme_tcp 00:25:54.124 rmmod nvme_fabrics 00:25:54.124 rmmod nvme_keyring 00:25:54.124 23:28:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:54.125 23:28:41 -- nvmf/common.sh@124 -- # set -e 00:25:54.125 23:28:41 -- nvmf/common.sh@125 -- # return 0 00:25:54.125 23:28:41 -- nvmf/common.sh@478 -- # '[' -n 4032823 ']' 00:25:54.125 23:28:41 -- nvmf/common.sh@479 -- # killprocess 4032823 00:25:54.125 23:28:41 -- 
common/autotest_common.sh@936 -- # '[' -z 4032823 ']' 00:25:54.125 23:28:41 -- common/autotest_common.sh@940 -- # kill -0 4032823 00:25:54.125 23:28:41 -- common/autotest_common.sh@941 -- # uname 00:25:54.125 23:28:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:54.125 23:28:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4032823 00:25:54.125 23:28:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:54.125 23:28:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:54.125 23:28:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4032823' 00:25:54.125 killing process with pid 4032823 00:25:54.125 23:28:41 -- common/autotest_common.sh@955 -- # kill 4032823 00:25:54.125 23:28:41 -- common/autotest_common.sh@960 -- # wait 4032823 00:25:54.125 23:28:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:54.125 23:28:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:54.125 23:28:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:54.125 23:28:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.125 23:28:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:54.125 23:28:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.125 23:28:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.125 23:28:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.700 23:28:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:54.700 00:25:54.700 real 1m14.468s 00:25:54.700 user 4m35.202s 00:25:54.700 sys 0m8.220s 00:25:54.700 23:28:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:54.700 23:28:43 -- common/autotest_common.sh@10 -- # set +x 00:25:54.700 ************************************ 00:25:54.700 END TEST nvmf_initiator_timeout 00:25:54.700 ************************************ 00:25:54.700 23:28:43 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:25:54.700 23:28:43 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:25:54.700 23:28:43 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:25:54.700 23:28:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:54.700 23:28:43 -- common/autotest_common.sh@10 -- # set +x 00:26:01.299 23:28:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:01.299 23:28:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:01.299 23:28:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:01.299 23:28:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:01.299 23:28:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:01.299 23:28:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:01.299 23:28:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:01.299 23:28:50 -- nvmf/common.sh@295 -- # net_devs=() 00:26:01.299 23:28:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:01.299 23:28:50 -- nvmf/common.sh@296 -- # e810=() 00:26:01.299 23:28:50 -- nvmf/common.sh@296 -- # local -ga e810 00:26:01.299 23:28:50 -- nvmf/common.sh@297 -- # x722=() 00:26:01.299 23:28:50 -- nvmf/common.sh@297 -- # local -ga x722 00:26:01.299 23:28:50 -- nvmf/common.sh@298 -- # mlx=() 00:26:01.299 23:28:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:01.299 23:28:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.299 23:28:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:01.299 23:28:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:01.299 23:28:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:01.299 23:28:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:01.299 23:28:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:01.299 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:01.299 23:28:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:01.299 23:28:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:01.299 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:01.299 23:28:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:01.299 23:28:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:01.299 23:28:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:01.299 23:28:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.299 23:28:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:01.299 23:28:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.299 23:28:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:01.299 Found net devices under 0000:31:00.0: cvl_0_0 00:26:01.299 23:28:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.299 23:28:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:01.299 23:28:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.299 23:28:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:01.299 23:28:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.299 23:28:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:01.299 Found net devices under 0000:31:00.1: cvl_0_1 00:26:01.299 23:28:50 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.299 23:28:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:01.299 23:28:50 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.299 23:28:50 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:26:01.299 23:28:50 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:01.299 23:28:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:01.299 23:28:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:01.299 23:28:50 -- common/autotest_common.sh@10 -- # set +x 00:26:01.561 ************************************ 00:26:01.561 START TEST nvmf_perf_adq 00:26:01.561 ************************************ 00:26:01.561 23:28:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:01.561 * Looking for test storage... 00:26:01.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:01.561 23:28:50 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.561 23:28:50 -- nvmf/common.sh@7 -- # uname -s 00:26:01.561 23:28:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.561 23:28:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.561 23:28:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.561 23:28:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.561 23:28:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.561 23:28:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.561 23:28:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.561 23:28:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.561 23:28:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.561 23:28:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.561 23:28:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:01.561 23:28:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:01.561 23:28:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.561 23:28:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.561 23:28:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.561 23:28:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.561 23:28:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.561 23:28:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.561 23:28:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.561 23:28:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.561 23:28:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.561 23:28:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.561 23:28:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.561 23:28:50 -- paths/export.sh@5 -- # export PATH 00:26:01.561 23:28:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.561 23:28:50 -- nvmf/common.sh@47 -- # : 0 00:26:01.561 23:28:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:01.561 23:28:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:01.561 23:28:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.561 23:28:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.561 23:28:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.561 23:28:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:01.561 23:28:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:01.561 23:28:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:01.561 23:28:50 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:01.561 23:28:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:01.561 23:28:50 -- common/autotest_common.sh@10 -- # set +x 00:26:09.709 23:28:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:09.709 23:28:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:09.709 23:28:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:09.709 23:28:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:09.709 23:28:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:09.709 23:28:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:09.709 23:28:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:09.709 23:28:57 -- nvmf/common.sh@295 -- # net_devs=() 00:26:09.709 23:28:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:09.709 23:28:57 -- nvmf/common.sh@296 -- # e810=() 00:26:09.709 23:28:57 -- nvmf/common.sh@296 -- # local -ga e810 00:26:09.709 23:28:57 -- nvmf/common.sh@297 -- # x722=() 00:26:09.709 23:28:57 -- nvmf/common.sh@297 -- # local -ga x722 00:26:09.709 23:28:57 -- nvmf/common.sh@298 -- # mlx=() 00:26:09.709 23:28:57 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:26:09.709 23:28:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.709 23:28:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:09.709 23:28:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:09.709 23:28:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:09.709 23:28:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.709 23:28:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:09.709 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:09.709 23:28:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.709 23:28:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:09.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:09.709 23:28:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:09.709 23:28:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:09.709 23:28:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.709 23:28:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.709 23:28:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:09.709 23:28:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.709 23:28:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:09.709 Found net devices under 0000:31:00.0: cvl_0_0 00:26:09.709 23:28:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.709 23:28:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.709 23:28:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:09.709 23:28:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:09.709 23:28:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.709 23:28:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:09.709 Found net devices under 0000:31:00.1: cvl_0_1 00:26:09.709 23:28:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.709 23:28:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:09.709 23:28:57 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.709 23:28:57 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:09.709 23:28:57 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:09.709 23:28:57 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:26:09.709 23:28:57 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:09.971 23:28:59 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:11.951 23:29:01 -- target/perf_adq.sh@54 -- # sleep 5 00:26:17.240 23:29:06 -- target/perf_adq.sh@67 -- # nvmftestinit 00:26:17.240 23:29:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:17.240 23:29:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.240 23:29:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:17.240 23:29:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:17.240 23:29:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:17.240 23:29:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.240 23:29:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:17.240 23:29:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.240 23:29:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:17.240 23:29:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:17.240 23:29:06 -- common/autotest_common.sh@10 -- # set +x 00:26:17.240 23:29:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:17.240 23:29:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:17.240 23:29:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:17.240 23:29:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:17.240 23:29:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:17.240 23:29:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:17.240 23:29:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:17.240 23:29:06 -- nvmf/common.sh@295 -- # net_devs=() 00:26:17.240 23:29:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:17.240 23:29:06 -- nvmf/common.sh@296 -- # e810=() 00:26:17.240 23:29:06 -- nvmf/common.sh@296 -- # local -ga e810 00:26:17.240 23:29:06 -- nvmf/common.sh@297 -- # x722=() 00:26:17.240 23:29:06 -- nvmf/common.sh@297 -- # local -ga x722 00:26:17.240 23:29:06 -- nvmf/common.sh@298 -- # mlx=() 00:26:17.240 23:29:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:17.240 23:29:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.240 23:29:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:17.240 23:29:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:17.240 23:29:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:17.240 23:29:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.240 23:29:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:17.240 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:17.240 23:29:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.240 23:29:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:17.240 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:17.240 23:29:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:17.240 23:29:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.240 23:29:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.240 23:29:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:17.240 23:29:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.240 23:29:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:17.240 Found net devices under 0000:31:00.0: cvl_0_0 00:26:17.240 23:29:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.240 23:29:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.240 23:29:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.240 23:29:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:17.240 23:29:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.240 23:29:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:17.240 Found net devices under 0000:31:00.1: cvl_0_1 00:26:17.240 23:29:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.240 23:29:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:17.240 23:29:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:17.240 23:29:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:17.240 23:29:06 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:17.240 23:29:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.240 23:29:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.240 23:29:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.240 23:29:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:17.240 23:29:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.240 23:29:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.240 23:29:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:17.240 23:29:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.240 23:29:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.240 23:29:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:17.240 23:29:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:17.240 23:29:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.240 23:29:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.240 23:29:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.240 23:29:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.240 23:29:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:17.240 23:29:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.240 23:29:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.240 23:29:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.240 23:29:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:17.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.742 ms 00:26:17.240 00:26:17.240 --- 10.0.0.2 ping statistics --- 00:26:17.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.240 rtt min/avg/max/mdev = 0.742/0.742/0.742/0.000 ms 00:26:17.240 23:29:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:17.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:26:17.240 00:26:17.240 --- 10.0.0.1 ping statistics --- 00:26:17.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.240 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:26:17.240 23:29:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.240 23:29:06 -- nvmf/common.sh@411 -- # return 0 00:26:17.240 23:29:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:17.240 23:29:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.240 23:29:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:17.240 23:29:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.240 23:29:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:17.240 23:29:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:17.240 23:29:06 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:17.240 23:29:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:17.240 23:29:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:17.240 23:29:06 -- common/autotest_common.sh@10 -- # set +x 00:26:17.240 23:29:06 -- nvmf/common.sh@470 -- # nvmfpid=4055152 00:26:17.240 23:29:06 -- nvmf/common.sh@471 -- # waitforlisten 4055152 00:26:17.240 23:29:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:17.240 23:29:06 -- common/autotest_common.sh@817 -- # '[' -z 4055152 ']' 00:26:17.240 23:29:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.240 23:29:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:17.240 23:29:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.240 23:29:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:17.240 23:29:06 -- common/autotest_common.sh@10 -- # set +x 00:26:17.502 [2024-04-26 23:29:06.539335] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:17.502 [2024-04-26 23:29:06.539418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.502 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.502 [2024-04-26 23:29:06.611002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.502 [2024-04-26 23:29:06.648733] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.502 [2024-04-26 23:29:06.648780] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.502 [2024-04-26 23:29:06.648787] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.502 [2024-04-26 23:29:06.648794] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.502 [2024-04-26 23:29:06.648800] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:17.502 [2024-04-26 23:29:06.648866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.502 [2024-04-26 23:29:06.648956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.502 [2024-04-26 23:29:06.649261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.502 [2024-04-26 23:29:06.649262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.076 23:29:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:18.076 23:29:07 -- common/autotest_common.sh@850 -- # return 0 00:26:18.076 23:29:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:18.076 23:29:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:18.076 23:29:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.335 23:29:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.335 23:29:07 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:26:18.335 23:29:07 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:18.335 23:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.336 23:29:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.336 23:29:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.336 23:29:07 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:18.336 23:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.336 23:29:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.336 23:29:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.336 23:29:07 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:18.336 23:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.336 23:29:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.336 [2024-04-26 23:29:07.439751] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.336 23:29:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.336 23:29:07 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:18.336 23:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.336 23:29:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.336 Malloc1 00:26:18.336 23:29:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.336 23:29:07 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:18.336 23:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.336 23:29:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.336 23:29:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.336 23:29:07 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:18.336 23:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.336 23:29:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.336 23:29:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.336 23:29:07 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:18.336 23:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.336 23:29:07 -- common/autotest_common.sh@10 -- # set +x 00:26:18.336 [2024-04-26 23:29:07.495172] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.336 23:29:07 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.336 23:29:07 -- target/perf_adq.sh@73 -- # perfpid=4055351 00:26:18.336 23:29:07 -- target/perf_adq.sh@74 -- # sleep 2 00:26:18.336 23:29:07 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:18.336 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.882 23:29:09 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:26:20.882 23:29:09 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:20.882 23:29:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.882 23:29:09 -- target/perf_adq.sh@76 -- # wc -l 00:26:20.882 23:29:09 -- common/autotest_common.sh@10 -- # set +x 00:26:20.882 23:29:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.882 23:29:09 -- target/perf_adq.sh@76 -- # count=4 00:26:20.882 23:29:09 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:26:20.882 23:29:09 -- target/perf_adq.sh@81 -- # wait 4055351 00:26:29.029 Initializing NVMe Controllers 00:26:29.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:29.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:29.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:29.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:29.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:29.029 Initialization complete. Launching workers. 00:26:29.029 ======================================================== 00:26:29.029 Latency(us) 00:26:29.029 Device Information : IOPS MiB/s Average min max 00:26:29.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13350.49 52.15 4793.76 1018.07 9084.47 00:26:29.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14843.49 57.98 4311.30 968.80 9525.35 00:26:29.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10190.90 39.81 6280.84 1282.93 10505.02 00:26:29.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10174.40 39.74 6291.43 1737.56 10165.84 00:26:29.029 ======================================================== 00:26:29.029 Total : 48559.28 189.68 5272.17 968.80 10505.02 00:26:29.029 00:26:29.029 23:29:17 -- target/perf_adq.sh@82 -- # nvmftestfini 00:26:29.029 23:29:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:29.029 23:29:17 -- nvmf/common.sh@117 -- # sync 00:26:29.029 23:29:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:29.029 23:29:17 -- nvmf/common.sh@120 -- # set +e 00:26:29.029 23:29:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:29.029 23:29:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:29.029 rmmod nvme_tcp 00:26:29.029 rmmod nvme_fabrics 00:26:29.029 rmmod nvme_keyring 00:26:29.029 23:29:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:29.029 23:29:17 -- nvmf/common.sh@124 -- # set -e 00:26:29.029 23:29:17 -- nvmf/common.sh@125 -- # return 0 00:26:29.029 23:29:17 -- nvmf/common.sh@478 -- # '[' -n 4055152 ']' 00:26:29.029 23:29:17 -- nvmf/common.sh@479 -- # killprocess 4055152 00:26:29.029 23:29:17 -- common/autotest_common.sh@936 -- # '[' -z 4055152 ']' 00:26:29.029 23:29:17 -- common/autotest_common.sh@940 -- # 
kill -0 4055152 00:26:29.029 23:29:17 -- common/autotest_common.sh@941 -- # uname 00:26:29.029 23:29:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:29.029 23:29:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4055152 00:26:29.029 23:29:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:29.029 23:29:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:29.029 23:29:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4055152' 00:26:29.029 killing process with pid 4055152 00:26:29.029 23:29:17 -- common/autotest_common.sh@955 -- # kill 4055152 00:26:29.029 23:29:17 -- common/autotest_common.sh@960 -- # wait 4055152 00:26:29.029 23:29:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:29.029 23:29:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:29.029 23:29:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:29.029 23:29:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.029 23:29:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:29.029 23:29:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.029 23:29:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:29.029 23:29:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.942 23:29:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:30.942 23:29:20 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:26:30.942 23:29:20 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:32.326 23:29:21 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:34.865 23:29:23 -- target/perf_adq.sh@54 -- # sleep 5 00:26:40.151 23:29:28 -- target/perf_adq.sh@87 -- # nvmftestinit 00:26:40.151 23:29:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:40.151 23:29:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.151 23:29:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:40.151 23:29:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:40.151 23:29:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:40.151 23:29:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.151 23:29:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.151 23:29:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.151 23:29:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:40.151 23:29:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.151 23:29:28 -- common/autotest_common.sh@10 -- # set +x 00:26:40.151 23:29:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:40.151 23:29:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.151 23:29:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.151 23:29:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.151 23:29:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.151 23:29:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.151 23:29:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.151 23:29:28 -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.151 23:29:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.151 23:29:28 -- nvmf/common.sh@296 -- # e810=() 00:26:40.151 23:29:28 -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.151 23:29:28 -- nvmf/common.sh@297 -- # x722=() 00:26:40.151 23:29:28 -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.151 23:29:28 -- nvmf/common.sh@298 -- # mlx=() 00:26:40.151 23:29:28 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:26:40.151 23:29:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.151 23:29:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.151 23:29:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.151 23:29:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.151 23:29:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.151 23:29:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:40.151 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:40.151 23:29:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.151 23:29:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:40.151 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:40.151 23:29:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.151 23:29:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.151 23:29:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.152 23:29:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.152 23:29:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.152 23:29:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.152 23:29:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:40.152 23:29:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.152 23:29:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:40.152 Found net devices under 0000:31:00.0: cvl_0_0 00:26:40.152 23:29:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.152 23:29:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.152 23:29:28 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.152 23:29:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:40.152 23:29:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.152 23:29:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:40.152 Found net devices under 0000:31:00.1: cvl_0_1 00:26:40.152 23:29:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.152 23:29:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:40.152 23:29:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:40.152 23:29:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:40.152 23:29:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:40.152 23:29:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:40.152 23:29:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.152 23:29:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.152 23:29:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.152 23:29:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.152 23:29:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.152 23:29:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.152 23:29:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.152 23:29:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.152 23:29:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.152 23:29:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.152 23:29:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.152 23:29:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.152 23:29:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.152 23:29:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.152 23:29:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.152 23:29:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.152 23:29:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.152 23:29:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.152 23:29:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.152 23:29:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:26:40.152 00:26:40.152 --- 10.0.0.2 ping statistics --- 00:26:40.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.152 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:26:40.152 23:29:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:26:40.152 00:26:40.152 --- 10.0.0.1 ping statistics --- 00:26:40.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.152 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:26:40.152 23:29:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.152 23:29:28 -- nvmf/common.sh@411 -- # return 0 00:26:40.152 23:29:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:40.152 23:29:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.152 23:29:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:40.152 23:29:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:40.152 23:29:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.152 23:29:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:40.152 23:29:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:40.152 23:29:28 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:26:40.152 23:29:28 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:40.152 23:29:28 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:40.152 23:29:28 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:40.152 net.core.busy_poll = 1 00:26:40.152 23:29:28 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:40.152 net.core.busy_read = 1 00:26:40.152 23:29:29 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:40.152 23:29:29 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:40.152 23:29:29 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:40.152 23:29:29 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:40.152 23:29:29 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:40.152 23:29:29 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:40.152 23:29:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:40.152 23:29:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:40.152 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:26:40.152 23:29:29 -- nvmf/common.sh@470 -- # nvmfpid=4059960 00:26:40.152 23:29:29 -- nvmf/common.sh@471 -- # waitforlisten 4059960 00:26:40.152 23:29:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:40.152 23:29:29 -- common/autotest_common.sh@817 -- # '[' -z 4059960 ']' 00:26:40.152 23:29:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.152 23:29:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:40.152 23:29:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:40.152 23:29:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:40.152 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:26:40.152 [2024-04-26 23:29:29.299995] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:40.152 [2024-04-26 23:29:29.300054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.152 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.152 [2024-04-26 23:29:29.364866] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.152 [2024-04-26 23:29:29.394667] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.152 [2024-04-26 23:29:29.394705] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.152 [2024-04-26 23:29:29.394715] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.152 [2024-04-26 23:29:29.394722] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.152 [2024-04-26 23:29:29.394729] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.152 [2024-04-26 23:29:29.394864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.152 [2024-04-26 23:29:29.394940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.152 [2024-04-26 23:29:29.395098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.152 [2024-04-26 23:29:29.395099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.413 23:29:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:40.413 23:29:29 -- common/autotest_common.sh@850 -- # return 0 00:26:40.413 23:29:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:40.413 23:29:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:40.413 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:26:40.413 23:29:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.413 23:29:29 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:26:40.413 23:29:29 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:40.413 23:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.413 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:26:40.413 23:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.413 23:29:29 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:40.413 23:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.413 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:26:40.413 23:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.413 23:29:29 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:40.413 23:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.413 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:26:40.413 [2024-04-26 23:29:29.556754] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.413 23:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.413 23:29:29 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
00:26:40.413 23:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.413 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:26:40.413 Malloc1 00:26:40.413 23:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.413 23:29:29 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.413 23:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.413 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:26:40.413 23:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.413 23:29:29 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:40.413 23:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.413 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:26:40.413 23:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.413 23:29:29 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.413 23:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.413 23:29:29 -- common/autotest_common.sh@10 -- # set +x 00:26:40.413 [2024-04-26 23:29:29.612054] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.413 23:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.413 23:29:29 -- target/perf_adq.sh@94 -- # perfpid=4059984 00:26:40.413 23:29:29 -- target/perf_adq.sh@95 -- # sleep 2 00:26:40.413 23:29:29 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:40.413 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.962 23:29:31 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:26:42.962 23:29:31 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:42.962 23:29:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.962 23:29:31 -- target/perf_adq.sh@97 -- # wc -l 00:26:42.962 23:29:31 -- common/autotest_common.sh@10 -- # set +x 00:26:42.962 23:29:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.962 23:29:31 -- target/perf_adq.sh@97 -- # count=2 00:26:42.962 23:29:31 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:26:42.962 23:29:31 -- target/perf_adq.sh@103 -- # wait 4059984 00:26:51.214 Initializing NVMe Controllers 00:26:51.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:51.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:51.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:51.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:51.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:51.214 Initialization complete. Launching workers. 
00:26:51.214 ======================================================== 00:26:51.214 Latency(us) 00:26:51.214 Device Information : IOPS MiB/s Average min max 00:26:51.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8804.30 34.39 7270.93 1165.13 54344.33 00:26:51.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14249.90 55.66 4491.18 1231.16 45397.27 00:26:51.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6381.20 24.93 10031.13 1196.79 55166.84 00:26:51.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6368.30 24.88 10080.94 1537.98 54402.79 00:26:51.214 ======================================================== 00:26:51.214 Total : 35803.70 139.86 7156.34 1165.13 55166.84 00:26:51.214 00:26:51.214 23:29:39 -- target/perf_adq.sh@104 -- # nvmftestfini 00:26:51.214 23:29:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:51.214 23:29:39 -- nvmf/common.sh@117 -- # sync 00:26:51.214 23:29:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.214 23:29:39 -- nvmf/common.sh@120 -- # set +e 00:26:51.214 23:29:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.214 23:29:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.214 rmmod nvme_tcp 00:26:51.214 rmmod nvme_fabrics 00:26:51.214 rmmod nvme_keyring 00:26:51.214 23:29:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.214 23:29:39 -- nvmf/common.sh@124 -- # set -e 00:26:51.214 23:29:39 -- nvmf/common.sh@125 -- # return 0 00:26:51.214 23:29:39 -- nvmf/common.sh@478 -- # '[' -n 4059960 ']' 00:26:51.214 23:29:39 -- nvmf/common.sh@479 -- # killprocess 4059960 00:26:51.214 23:29:39 -- common/autotest_common.sh@936 -- # '[' -z 4059960 ']' 00:26:51.214 23:29:39 -- common/autotest_common.sh@940 -- # kill -0 4059960 00:26:51.214 23:29:39 -- common/autotest_common.sh@941 -- # uname 00:26:51.214 23:29:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:51.214 23:29:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4059960 00:26:51.214 23:29:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:51.214 23:29:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:51.214 23:29:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4059960' 00:26:51.214 killing process with pid 4059960 00:26:51.214 23:29:39 -- common/autotest_common.sh@955 -- # kill 4059960 00:26:51.214 23:29:39 -- common/autotest_common.sh@960 -- # wait 4059960 00:26:51.214 23:29:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:51.214 23:29:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:51.214 23:29:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:51.214 23:29:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.214 23:29:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.214 23:29:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.214 23:29:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.214 23:29:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.132 23:29:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.132 23:29:42 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:26:53.132 00:26:53.132 real 0m51.518s 00:26:53.132 user 2m47.511s 00:26:53.132 sys 0m10.241s 00:26:53.132 23:29:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:53.132 23:29:42 -- common/autotest_common.sh@10 -- # set +x 00:26:53.132 
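
nvmftestfini tears things down in a fixed order: sync, unload the kernel initiator modules (the rmmod lines are modprobe -v -r narrating its work), kill the target only after killprocess has checked that the pid's comm= is an SPDK reactor rather than some unrelated process, then drop into nvmf_tcp_fini, which removes the namespace and flushes the initiator-side address. A rough shape of that path, with the pid and interface from this run; the {1..20} retry bound comes from the nvmf/common.sh trace, and the && break structure of the loop body is a guess:

    # Rough shape of the traced nvmftestfini; pid 4059960 and cvl_0_1 are
    # specific to this run, and the real code carries more error handling.
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    ps --no-headers -o comm= 4059960    # expect an SPDK reactor name (reactor_0 here)
    kill 4059960
    wait 4059960                        # works because nvmf_tgt is a child of this shell
    ip -4 addr flush cvl_0_1
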
************************************ 00:26:53.132 END TEST nvmf_perf_adq 00:26:53.132 ************************************ 00:26:53.132 23:29:42 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:53.132 23:29:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:53.132 23:29:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:53.132 23:29:42 -- common/autotest_common.sh@10 -- # set +x 00:26:53.132 ************************************ 00:26:53.132 START TEST nvmf_shutdown 00:26:53.132 ************************************ 00:26:53.132 23:29:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:53.394 * Looking for test storage... 00:26:53.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:53.394 23:29:42 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.394 23:29:42 -- nvmf/common.sh@7 -- # uname -s 00:26:53.394 23:29:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.394 23:29:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.394 23:29:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.394 23:29:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.394 23:29:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.394 23:29:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.394 23:29:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.394 23:29:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.394 23:29:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.394 23:29:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.394 23:29:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:53.394 23:29:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:53.394 23:29:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.394 23:29:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.394 23:29:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.394 23:29:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.394 23:29:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.394 23:29:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.394 23:29:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.394 23:29:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.394 23:29:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.395 23:29:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.395 23:29:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.395 23:29:42 -- paths/export.sh@5 -- # export PATH 00:26:53.395 23:29:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.395 23:29:42 -- nvmf/common.sh@47 -- # : 0 00:26:53.395 23:29:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.395 23:29:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.395 23:29:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.395 23:29:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.395 23:29:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.395 23:29:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.395 23:29:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.395 23:29:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.395 23:29:42 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:53.395 23:29:42 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:53.395 23:29:42 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:53.395 23:29:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:53.395 23:29:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:53.395 23:29:42 -- common/autotest_common.sh@10 -- # set +x 00:26:53.395 ************************************ 00:26:53.395 START TEST nvmf_shutdown_tc1 00:26:53.395 ************************************ 00:26:53.395 23:29:42 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:26:53.395 23:29:42 -- target/shutdown.sh@74 -- # starttarget 00:26:53.395 23:29:42 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:53.395 23:29:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:53.395 23:29:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.395 23:29:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:53.395 23:29:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:53.395 23:29:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:53.395 
23:29:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.395 23:29:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.395 23:29:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.395 23:29:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:53.395 23:29:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:53.395 23:29:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.395 23:29:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.531 23:29:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:01.531 23:29:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:01.531 23:29:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:01.531 23:29:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:01.531 23:29:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:01.531 23:29:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:01.531 23:29:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:01.531 23:29:49 -- nvmf/common.sh@295 -- # net_devs=() 00:27:01.531 23:29:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:01.531 23:29:49 -- nvmf/common.sh@296 -- # e810=() 00:27:01.531 23:29:49 -- nvmf/common.sh@296 -- # local -ga e810 00:27:01.531 23:29:49 -- nvmf/common.sh@297 -- # x722=() 00:27:01.531 23:29:49 -- nvmf/common.sh@297 -- # local -ga x722 00:27:01.531 23:29:49 -- nvmf/common.sh@298 -- # mlx=() 00:27:01.531 23:29:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:01.531 23:29:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.531 23:29:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:01.531 23:29:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:01.531 23:29:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:01.531 23:29:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.531 23:29:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:01.531 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:01.531 23:29:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:27:01.531 23:29:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:01.531 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:01.531 23:29:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:01.531 23:29:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:01.531 23:29:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.531 23:29:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.531 23:29:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:01.531 23:29:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.531 23:29:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:01.532 Found net devices under 0000:31:00.0: cvl_0_0 00:27:01.532 23:29:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.532 23:29:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.532 23:29:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.532 23:29:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:01.532 23:29:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.532 23:29:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:01.532 Found net devices under 0000:31:00.1: cvl_0_1 00:27:01.532 23:29:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.532 23:29:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:01.532 23:29:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:01.532 23:29:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:01.532 23:29:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:01.532 23:29:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:01.532 23:29:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.532 23:29:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.532 23:29:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.532 23:29:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:01.532 23:29:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.532 23:29:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.532 23:29:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:01.532 23:29:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.532 23:29:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.532 23:29:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:01.532 23:29:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:01.532 23:29:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.532 23:29:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.532 23:29:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.532 23:29:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.532 23:29:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:01.532 23:29:49 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.532 23:29:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.532 23:29:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.532 23:29:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:01.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:27:01.532 00:27:01.532 --- 10.0.0.2 ping statistics --- 00:27:01.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.532 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:27:01.532 23:29:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:01.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:27:01.532 00:27:01.532 --- 10.0.0.1 ping statistics --- 00:27:01.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.532 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:27:01.532 23:29:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.532 23:29:49 -- nvmf/common.sh@411 -- # return 0 00:27:01.532 23:29:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:01.532 23:29:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.532 23:29:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:01.532 23:29:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:01.532 23:29:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.532 23:29:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:01.532 23:29:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:01.532 23:29:49 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:01.532 23:29:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:01.532 23:29:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:01.532 23:29:49 -- common/autotest_common.sh@10 -- # set +x 00:27:01.532 23:29:49 -- nvmf/common.sh@470 -- # nvmfpid=4066365 00:27:01.532 23:29:49 -- nvmf/common.sh@471 -- # waitforlisten 4066365 00:27:01.532 23:29:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:01.532 23:29:49 -- common/autotest_common.sh@817 -- # '[' -z 4066365 ']' 00:27:01.532 23:29:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.532 23:29:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:01.532 23:29:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.532 23:29:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:01.532 23:29:49 -- common/autotest_common.sh@10 -- # set +x 00:27:01.532 [2024-04-26 23:29:49.887421] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
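
Everything the shutdown tests need network-wise is set up in the block above: the two e810 ports (enumerated on this rig as cvl_0_0 and cvl_0_1) are cabled back-to-back, and the target-side port is moved into the cvl_0_0_ns_spdk namespace so a single machine can play both ends of the fabric. The two pings prove layer-3 reachability in each direction before nvmf_tgt comes up inside the namespace via ip netns exec with core mask 0x1E. The wiring, condensed from the trace (interface names are whatever this rig enumerated, not generic):

    # Condensed from the traced nvmf_tcp_init; cvl_0_0/cvl_0_1 are this
    # rig's interface names, and 4420 is the NVMe/TCP listener port.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # root ns reaches the target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and the reverse path works
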
00:27:01.532 [2024-04-26 23:29:49.887485] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.532 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.532 [2024-04-26 23:29:49.959000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.532 [2024-04-26 23:29:49.997228] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.532 [2024-04-26 23:29:49.997278] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.532 [2024-04-26 23:29:49.997286] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.532 [2024-04-26 23:29:49.997292] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.532 [2024-04-26 23:29:49.997299] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.532 [2024-04-26 23:29:49.997444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.532 [2024-04-26 23:29:49.997561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.532 [2024-04-26 23:29:49.997720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.532 [2024-04-26 23:29:49.997721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:01.532 23:29:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:01.532 23:29:50 -- common/autotest_common.sh@850 -- # return 0 00:27:01.532 23:29:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:01.532 23:29:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:01.532 23:29:50 -- common/autotest_common.sh@10 -- # set +x 00:27:01.532 23:29:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.532 23:29:50 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.532 23:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.532 23:29:50 -- common/autotest_common.sh@10 -- # set +x 00:27:01.532 [2024-04-26 23:29:50.711543] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.532 23:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.532 23:29:50 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:01.532 23:29:50 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:01.532 23:29:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:01.532 23:29:50 -- common/autotest_common.sh@10 -- # set +x 00:27:01.532 23:29:50 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:01.532 23:29:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:01.532 23:29:50 -- target/shutdown.sh@28 -- # cat 00:27:01.532 23:29:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:01.532 23:29:50 -- target/shutdown.sh@28 -- # cat 00:27:01.532 23:29:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:01.532 23:29:50 -- target/shutdown.sh@28 -- # cat 00:27:01.532 23:29:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:01.532 23:29:50 -- target/shutdown.sh@28 -- # cat 00:27:01.532 23:29:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:01.532 23:29:50 -- target/shutdown.sh@28 
-- # cat 00:27:01.532 23:29:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:01.532 23:29:50 -- target/shutdown.sh@28 -- # cat 00:27:01.532 23:29:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:01.532 23:29:50 -- target/shutdown.sh@28 -- # cat 00:27:01.532 23:29:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:01.532 23:29:50 -- target/shutdown.sh@28 -- # cat 00:27:01.532 23:29:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:01.532 23:29:50 -- target/shutdown.sh@28 -- # cat 00:27:01.532 23:29:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:01.532 23:29:50 -- target/shutdown.sh@28 -- # cat 00:27:01.532 23:29:50 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:01.532 23:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.532 23:29:50 -- common/autotest_common.sh@10 -- # set +x 00:27:01.793 Malloc1 00:27:01.793 [2024-04-26 23:29:50.812022] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.793 Malloc2 00:27:01.793 Malloc3 00:27:01.793 Malloc4 00:27:01.793 Malloc5 00:27:01.793 Malloc6 00:27:01.793 Malloc7 00:27:02.055 Malloc8 00:27:02.055 Malloc9 00:27:02.055 Malloc10 00:27:02.055 23:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.055 23:29:51 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:02.055 23:29:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:02.055 23:29:51 -- common/autotest_common.sh@10 -- # set +x 00:27:02.055 23:29:51 -- target/shutdown.sh@78 -- # perfpid=4066595 00:27:02.055 23:29:51 -- target/shutdown.sh@79 -- # waitforlisten 4066595 /var/tmp/bdevperf.sock 00:27:02.055 23:29:51 -- common/autotest_common.sh@817 -- # '[' -z 4066595 ']' 00:27:02.055 23:29:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:02.055 23:29:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:02.055 23:29:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:02.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
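
Where the perf_adq run exported one namespace, the shutdown tests provision ten: the traced loop appends one block per index to rpcs.txt, and the single rpc_cmd at shutdown.sh@35 plays the whole file back in one shot, which is why Malloc1 through Malloc10 appear in a burst followed by one listener notice. The heredoc body itself is not shown by xtrace, but given MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 from shutdown.sh, a functionally equivalent expansion per subsystem would be something like the following (the script uses cat with a heredoc rather than echo, and the SPDK$i serial format is illustrative):

    # Plausible expansion of the batched rpcs.txt; rpc_cmd then reads the
    # whole batch from stdin instead of making forty separate round-trips.
    for i in {1..10}; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    rpc_cmd < rpcs.txt
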
00:27:02.055 23:29:51 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:02.055 23:29:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:02.055 23:29:51 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:02.055 23:29:51 -- common/autotest_common.sh@10 -- # set +x 00:27:02.055 23:29:51 -- nvmf/common.sh@521 -- # config=() 00:27:02.055 23:29:51 -- nvmf/common.sh@521 -- # local subsystem config 00:27:02.055 23:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:02.055 { 00:27:02.055 "params": { 00:27:02.055 "name": "Nvme$subsystem", 00:27:02.055 "trtype": "$TEST_TRANSPORT", 00:27:02.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.055 "adrfam": "ipv4", 00:27:02.055 "trsvcid": "$NVMF_PORT", 00:27:02.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.055 "hdgst": ${hdgst:-false}, 00:27:02.055 "ddgst": ${ddgst:-false} 00:27:02.055 }, 00:27:02.055 "method": "bdev_nvme_attach_controller" 00:27:02.055 } 00:27:02.055 EOF 00:27:02.055 )") 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # cat 00:27:02.055 23:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:02.055 { 00:27:02.055 "params": { 00:27:02.055 "name": "Nvme$subsystem", 00:27:02.055 "trtype": "$TEST_TRANSPORT", 00:27:02.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.055 "adrfam": "ipv4", 00:27:02.055 "trsvcid": "$NVMF_PORT", 00:27:02.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.055 "hdgst": ${hdgst:-false}, 00:27:02.055 "ddgst": ${ddgst:-false} 00:27:02.055 }, 00:27:02.055 "method": "bdev_nvme_attach_controller" 00:27:02.055 } 00:27:02.055 EOF 00:27:02.055 )") 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # cat 00:27:02.055 23:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:02.055 { 00:27:02.055 "params": { 00:27:02.055 "name": "Nvme$subsystem", 00:27:02.055 "trtype": "$TEST_TRANSPORT", 00:27:02.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.055 "adrfam": "ipv4", 00:27:02.055 "trsvcid": "$NVMF_PORT", 00:27:02.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.055 "hdgst": ${hdgst:-false}, 00:27:02.055 "ddgst": ${ddgst:-false} 00:27:02.055 }, 00:27:02.055 "method": "bdev_nvme_attach_controller" 00:27:02.055 } 00:27:02.055 EOF 00:27:02.055 )") 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # cat 00:27:02.055 23:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:02.055 { 00:27:02.055 "params": { 00:27:02.055 "name": "Nvme$subsystem", 00:27:02.055 "trtype": "$TEST_TRANSPORT", 00:27:02.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.055 "adrfam": "ipv4", 00:27:02.055 "trsvcid": "$NVMF_PORT", 00:27:02.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.055 "hdgst": ${hdgst:-false}, 00:27:02.055 "ddgst": ${ddgst:-false} 00:27:02.055 }, 00:27:02.055 "method": "bdev_nvme_attach_controller" 00:27:02.055 } 00:27:02.055 EOF 00:27:02.055 )") 00:27:02.055 23:29:51 -- 
nvmf/common.sh@543 -- # cat 00:27:02.055 23:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:02.055 { 00:27:02.055 "params": { 00:27:02.055 "name": "Nvme$subsystem", 00:27:02.055 "trtype": "$TEST_TRANSPORT", 00:27:02.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.055 "adrfam": "ipv4", 00:27:02.055 "trsvcid": "$NVMF_PORT", 00:27:02.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.055 "hdgst": ${hdgst:-false}, 00:27:02.055 "ddgst": ${ddgst:-false} 00:27:02.055 }, 00:27:02.055 "method": "bdev_nvme_attach_controller" 00:27:02.055 } 00:27:02.055 EOF 00:27:02.055 )") 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # cat 00:27:02.055 23:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:02.055 { 00:27:02.055 "params": { 00:27:02.055 "name": "Nvme$subsystem", 00:27:02.055 "trtype": "$TEST_TRANSPORT", 00:27:02.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.055 "adrfam": "ipv4", 00:27:02.055 "trsvcid": "$NVMF_PORT", 00:27:02.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.055 "hdgst": ${hdgst:-false}, 00:27:02.055 "ddgst": ${ddgst:-false} 00:27:02.055 }, 00:27:02.055 "method": "bdev_nvme_attach_controller" 00:27:02.055 } 00:27:02.055 EOF 00:27:02.055 )") 00:27:02.055 [2024-04-26 23:29:51.259040] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:02.055 [2024-04-26 23:29:51.259091] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # cat 00:27:02.055 23:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:02.055 { 00:27:02.055 "params": { 00:27:02.055 "name": "Nvme$subsystem", 00:27:02.055 "trtype": "$TEST_TRANSPORT", 00:27:02.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.055 "adrfam": "ipv4", 00:27:02.055 "trsvcid": "$NVMF_PORT", 00:27:02.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.055 "hdgst": ${hdgst:-false}, 00:27:02.055 "ddgst": ${ddgst:-false} 00:27:02.055 }, 00:27:02.055 "method": "bdev_nvme_attach_controller" 00:27:02.055 } 00:27:02.055 EOF 00:27:02.055 )") 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # cat 00:27:02.055 23:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:02.055 { 00:27:02.055 "params": { 00:27:02.055 "name": "Nvme$subsystem", 00:27:02.055 "trtype": "$TEST_TRANSPORT", 00:27:02.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.055 "adrfam": "ipv4", 00:27:02.055 "trsvcid": "$NVMF_PORT", 00:27:02.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.055 "hdgst": ${hdgst:-false}, 00:27:02.055 "ddgst": ${ddgst:-false} 00:27:02.055 }, 00:27:02.055 "method": "bdev_nvme_attach_controller" 00:27:02.055 } 00:27:02.055 EOF 00:27:02.055 )") 00:27:02.055 23:29:51 -- nvmf/common.sh@543 -- # cat 00:27:02.055 23:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.055 23:29:51 -- nvmf/common.sh@543 
-- # config+=("$(cat <<-EOF 00:27:02.055 { 00:27:02.055 "params": { 00:27:02.055 "name": "Nvme$subsystem", 00:27:02.055 "trtype": "$TEST_TRANSPORT", 00:27:02.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.055 "adrfam": "ipv4", 00:27:02.055 "trsvcid": "$NVMF_PORT", 00:27:02.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.055 "hdgst": ${hdgst:-false}, 00:27:02.055 "ddgst": ${ddgst:-false} 00:27:02.055 }, 00:27:02.055 "method": "bdev_nvme_attach_controller" 00:27:02.056 } 00:27:02.056 EOF 00:27:02.056 )") 00:27:02.056 23:29:51 -- nvmf/common.sh@543 -- # cat 00:27:02.056 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.056 23:29:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:02.056 23:29:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:02.056 { 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme$subsystem", 00:27:02.056 "trtype": "$TEST_TRANSPORT", 00:27:02.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "$NVMF_PORT", 00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.056 "hdgst": ${hdgst:-false}, 00:27:02.056 "ddgst": ${ddgst:-false} 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 } 00:27:02.056 EOF 00:27:02.056 )") 00:27:02.056 23:29:51 -- nvmf/common.sh@543 -- # cat 00:27:02.056 23:29:51 -- nvmf/common.sh@545 -- # jq . 00:27:02.056 23:29:51 -- nvmf/common.sh@546 -- # IFS=, 00:27:02.056 23:29:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme1", 00:27:02.056 "trtype": "tcp", 00:27:02.056 "traddr": "10.0.0.2", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "4420", 00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:02.056 "hdgst": false, 00:27:02.056 "ddgst": false 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 },{ 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme2", 00:27:02.056 "trtype": "tcp", 00:27:02.056 "traddr": "10.0.0.2", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "4420", 00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:02.056 "hdgst": false, 00:27:02.056 "ddgst": false 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 },{ 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme3", 00:27:02.056 "trtype": "tcp", 00:27:02.056 "traddr": "10.0.0.2", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "4420", 00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:02.056 "hdgst": false, 00:27:02.056 "ddgst": false 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 },{ 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme4", 00:27:02.056 "trtype": "tcp", 00:27:02.056 "traddr": "10.0.0.2", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "4420", 00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:02.056 "hdgst": false, 00:27:02.056 "ddgst": false 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 },{ 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme5", 00:27:02.056 "trtype": "tcp", 00:27:02.056 "traddr": "10.0.0.2", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "4420", 
00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:02.056 "hdgst": false, 00:27:02.056 "ddgst": false 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 },{ 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme6", 00:27:02.056 "trtype": "tcp", 00:27:02.056 "traddr": "10.0.0.2", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "4420", 00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:02.056 "hdgst": false, 00:27:02.056 "ddgst": false 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 },{ 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme7", 00:27:02.056 "trtype": "tcp", 00:27:02.056 "traddr": "10.0.0.2", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "4420", 00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:02.056 "hdgst": false, 00:27:02.056 "ddgst": false 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 },{ 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme8", 00:27:02.056 "trtype": "tcp", 00:27:02.056 "traddr": "10.0.0.2", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "4420", 00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:02.056 "hdgst": false, 00:27:02.056 "ddgst": false 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 },{ 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme9", 00:27:02.056 "trtype": "tcp", 00:27:02.056 "traddr": "10.0.0.2", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "4420", 00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:02.056 "hdgst": false, 00:27:02.056 "ddgst": false 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 },{ 00:27:02.056 "params": { 00:27:02.056 "name": "Nvme10", 00:27:02.056 "trtype": "tcp", 00:27:02.056 "traddr": "10.0.0.2", 00:27:02.056 "adrfam": "ipv4", 00:27:02.056 "trsvcid": "4420", 00:27:02.056 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:02.056 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:02.056 "hdgst": false, 00:27:02.056 "ddgst": false 00:27:02.056 }, 00:27:02.056 "method": "bdev_nvme_attach_controller" 00:27:02.056 }' 00:27:02.317 [2024-04-26 23:29:51.320570] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.317 [2024-04-26 23:29:51.349754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.259 23:29:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:03.259 23:29:52 -- common/autotest_common.sh@850 -- # return 0 00:27:03.259 23:29:52 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:03.259 23:29:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.259 23:29:52 -- common/autotest_common.sh@10 -- # set +x 00:27:03.520 23:29:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.520 23:29:52 -- target/shutdown.sh@83 -- # kill -9 4066595 00:27:03.520 23:29:52 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:03.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4066595 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:03.520 23:29:52 -- target/shutdown.sh@87 -- # sleep 1 00:27:04.470 
23:29:53 -- target/shutdown.sh@88 -- # kill -0 4066365 00:27:04.470 23:29:53 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:04.470 23:29:53 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:04.470 23:29:53 -- nvmf/common.sh@521 -- # config=() 00:27:04.470 23:29:53 -- nvmf/common.sh@521 -- # local subsystem config 00:27:04.470 23:29:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:04.470 23:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:04.470 { 00:27:04.470 "params": { 00:27:04.470 "name": "Nvme$subsystem", 00:27:04.470 "trtype": "$TEST_TRANSPORT", 00:27:04.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.470 "adrfam": "ipv4", 00:27:04.470 "trsvcid": "$NVMF_PORT", 00:27:04.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.470 "hdgst": ${hdgst:-false}, 00:27:04.470 "ddgst": ${ddgst:-false} 00:27:04.470 }, 00:27:04.470 "method": "bdev_nvme_attach_controller" 00:27:04.470 } 00:27:04.470 EOF 00:27:04.470 )") 00:27:04.470 23:29:53 -- nvmf/common.sh@543 -- # cat 00:27:04.470 23:29:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:04.470 23:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:04.470 { 00:27:04.470 "params": { 00:27:04.470 "name": "Nvme$subsystem", 00:27:04.470 "trtype": "$TEST_TRANSPORT", 00:27:04.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.470 "adrfam": "ipv4", 00:27:04.470 "trsvcid": "$NVMF_PORT", 00:27:04.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.470 "hdgst": ${hdgst:-false}, 00:27:04.470 "ddgst": ${ddgst:-false} 00:27:04.470 }, 00:27:04.470 "method": "bdev_nvme_attach_controller" 00:27:04.470 } 00:27:04.470 EOF 00:27:04.470 )") 00:27:04.470 23:29:53 -- nvmf/common.sh@543 -- # cat 00:27:04.470 23:29:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:04.470 23:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:04.470 { 00:27:04.470 "params": { 00:27:04.470 "name": "Nvme$subsystem", 00:27:04.470 "trtype": "$TEST_TRANSPORT", 00:27:04.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.470 "adrfam": "ipv4", 00:27:04.470 "trsvcid": "$NVMF_PORT", 00:27:04.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.470 "hdgst": ${hdgst:-false}, 00:27:04.470 "ddgst": ${ddgst:-false} 00:27:04.470 }, 00:27:04.470 "method": "bdev_nvme_attach_controller" 00:27:04.470 } 00:27:04.470 EOF 00:27:04.470 )") 00:27:04.470 23:29:53 -- nvmf/common.sh@543 -- # cat 00:27:04.470 23:29:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:04.470 23:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:04.470 { 00:27:04.470 "params": { 00:27:04.470 "name": "Nvme$subsystem", 00:27:04.470 "trtype": "$TEST_TRANSPORT", 00:27:04.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.470 "adrfam": "ipv4", 00:27:04.470 "trsvcid": "$NVMF_PORT", 00:27:04.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.470 "hdgst": ${hdgst:-false}, 00:27:04.470 "ddgst": ${ddgst:-false} 00:27:04.470 }, 00:27:04.470 "method": "bdev_nvme_attach_controller" 00:27:04.470 } 00:27:04.470 EOF 00:27:04.470 )") 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # cat 00:27:04.471 23:29:53 -- nvmf/common.sh@523 -- # for subsystem in 
"${@:-1}" 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:04.471 { 00:27:04.471 "params": { 00:27:04.471 "name": "Nvme$subsystem", 00:27:04.471 "trtype": "$TEST_TRANSPORT", 00:27:04.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.471 "adrfam": "ipv4", 00:27:04.471 "trsvcid": "$NVMF_PORT", 00:27:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.471 "hdgst": ${hdgst:-false}, 00:27:04.471 "ddgst": ${ddgst:-false} 00:27:04.471 }, 00:27:04.471 "method": "bdev_nvme_attach_controller" 00:27:04.471 } 00:27:04.471 EOF 00:27:04.471 )") 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # cat 00:27:04.471 23:29:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:04.471 { 00:27:04.471 "params": { 00:27:04.471 "name": "Nvme$subsystem", 00:27:04.471 "trtype": "$TEST_TRANSPORT", 00:27:04.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.471 "adrfam": "ipv4", 00:27:04.471 "trsvcid": "$NVMF_PORT", 00:27:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.471 "hdgst": ${hdgst:-false}, 00:27:04.471 "ddgst": ${ddgst:-false} 00:27:04.471 }, 00:27:04.471 "method": "bdev_nvme_attach_controller" 00:27:04.471 } 00:27:04.471 EOF 00:27:04.471 )") 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # cat 00:27:04.471 [2024-04-26 23:29:53.576312] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:04.471 [2024-04-26 23:29:53.576364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4067189 ] 00:27:04.471 23:29:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:04.471 { 00:27:04.471 "params": { 00:27:04.471 "name": "Nvme$subsystem", 00:27:04.471 "trtype": "$TEST_TRANSPORT", 00:27:04.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.471 "adrfam": "ipv4", 00:27:04.471 "trsvcid": "$NVMF_PORT", 00:27:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.471 "hdgst": ${hdgst:-false}, 00:27:04.471 "ddgst": ${ddgst:-false} 00:27:04.471 }, 00:27:04.471 "method": "bdev_nvme_attach_controller" 00:27:04.471 } 00:27:04.471 EOF 00:27:04.471 )") 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # cat 00:27:04.471 23:29:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:04.471 { 00:27:04.471 "params": { 00:27:04.471 "name": "Nvme$subsystem", 00:27:04.471 "trtype": "$TEST_TRANSPORT", 00:27:04.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.471 "adrfam": "ipv4", 00:27:04.471 "trsvcid": "$NVMF_PORT", 00:27:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.471 "hdgst": ${hdgst:-false}, 00:27:04.471 "ddgst": ${ddgst:-false} 00:27:04.471 }, 00:27:04.471 "method": "bdev_nvme_attach_controller" 00:27:04.471 } 00:27:04.471 EOF 00:27:04.471 )") 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # cat 00:27:04.471 23:29:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:04.471 { 00:27:04.471 "params": { 
00:27:04.471 "name": "Nvme$subsystem", 00:27:04.471 "trtype": "$TEST_TRANSPORT", 00:27:04.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.471 "adrfam": "ipv4", 00:27:04.471 "trsvcid": "$NVMF_PORT", 00:27:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.471 "hdgst": ${hdgst:-false}, 00:27:04.471 "ddgst": ${ddgst:-false} 00:27:04.471 }, 00:27:04.471 "method": "bdev_nvme_attach_controller" 00:27:04.471 } 00:27:04.471 EOF 00:27:04.471 )") 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # cat 00:27:04.471 23:29:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:04.471 { 00:27:04.471 "params": { 00:27:04.471 "name": "Nvme$subsystem", 00:27:04.471 "trtype": "$TEST_TRANSPORT", 00:27:04.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.471 "adrfam": "ipv4", 00:27:04.471 "trsvcid": "$NVMF_PORT", 00:27:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.471 "hdgst": ${hdgst:-false}, 00:27:04.471 "ddgst": ${ddgst:-false} 00:27:04.471 }, 00:27:04.471 "method": "bdev_nvme_attach_controller" 00:27:04.471 } 00:27:04.471 EOF 00:27:04.471 )") 00:27:04.471 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.471 23:29:53 -- nvmf/common.sh@543 -- # cat 00:27:04.471 23:29:53 -- nvmf/common.sh@545 -- # jq . 00:27:04.471 23:29:53 -- nvmf/common.sh@546 -- # IFS=, 00:27:04.471 23:29:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:04.471 "params": { 00:27:04.471 "name": "Nvme1", 00:27:04.471 "trtype": "tcp", 00:27:04.471 "traddr": "10.0.0.2", 00:27:04.471 "adrfam": "ipv4", 00:27:04.471 "trsvcid": "4420", 00:27:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:04.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:04.471 "hdgst": false, 00:27:04.471 "ddgst": false 00:27:04.471 }, 00:27:04.471 "method": "bdev_nvme_attach_controller" 00:27:04.471 },{ 00:27:04.471 "params": { 00:27:04.471 "name": "Nvme2", 00:27:04.471 "trtype": "tcp", 00:27:04.471 "traddr": "10.0.0.2", 00:27:04.471 "adrfam": "ipv4", 00:27:04.471 "trsvcid": "4420", 00:27:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:04.471 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:04.471 "hdgst": false, 00:27:04.471 "ddgst": false 00:27:04.471 }, 00:27:04.471 "method": "bdev_nvme_attach_controller" 00:27:04.471 },{ 00:27:04.471 "params": { 00:27:04.471 "name": "Nvme3", 00:27:04.471 "trtype": "tcp", 00:27:04.471 "traddr": "10.0.0.2", 00:27:04.471 "adrfam": "ipv4", 00:27:04.471 "trsvcid": "4420", 00:27:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:04.471 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:04.471 "hdgst": false, 00:27:04.471 "ddgst": false 00:27:04.471 }, 00:27:04.471 "method": "bdev_nvme_attach_controller" 00:27:04.471 },{ 00:27:04.471 "params": { 00:27:04.471 "name": "Nvme4", 00:27:04.471 "trtype": "tcp", 00:27:04.471 "traddr": "10.0.0.2", 00:27:04.471 "adrfam": "ipv4", 00:27:04.471 "trsvcid": "4420", 00:27:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:04.471 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:04.471 "hdgst": false, 00:27:04.471 "ddgst": false 00:27:04.471 }, 00:27:04.471 "method": "bdev_nvme_attach_controller" 00:27:04.471 },{ 00:27:04.471 "params": { 00:27:04.472 "name": "Nvme5", 00:27:04.472 "trtype": "tcp", 00:27:04.472 "traddr": "10.0.0.2", 00:27:04.472 "adrfam": "ipv4", 00:27:04.472 "trsvcid": "4420", 00:27:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:04.472 "hostnqn": 
"nqn.2016-06.io.spdk:host5", 00:27:04.472 "hdgst": false, 00:27:04.472 "ddgst": false 00:27:04.472 }, 00:27:04.472 "method": "bdev_nvme_attach_controller" 00:27:04.472 },{ 00:27:04.472 "params": { 00:27:04.472 "name": "Nvme6", 00:27:04.472 "trtype": "tcp", 00:27:04.472 "traddr": "10.0.0.2", 00:27:04.472 "adrfam": "ipv4", 00:27:04.472 "trsvcid": "4420", 00:27:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:04.472 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:04.472 "hdgst": false, 00:27:04.472 "ddgst": false 00:27:04.472 }, 00:27:04.472 "method": "bdev_nvme_attach_controller" 00:27:04.472 },{ 00:27:04.472 "params": { 00:27:04.472 "name": "Nvme7", 00:27:04.472 "trtype": "tcp", 00:27:04.472 "traddr": "10.0.0.2", 00:27:04.472 "adrfam": "ipv4", 00:27:04.472 "trsvcid": "4420", 00:27:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:04.472 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:04.472 "hdgst": false, 00:27:04.472 "ddgst": false 00:27:04.472 }, 00:27:04.472 "method": "bdev_nvme_attach_controller" 00:27:04.472 },{ 00:27:04.472 "params": { 00:27:04.472 "name": "Nvme8", 00:27:04.472 "trtype": "tcp", 00:27:04.472 "traddr": "10.0.0.2", 00:27:04.472 "adrfam": "ipv4", 00:27:04.472 "trsvcid": "4420", 00:27:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:04.472 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:04.472 "hdgst": false, 00:27:04.472 "ddgst": false 00:27:04.472 }, 00:27:04.472 "method": "bdev_nvme_attach_controller" 00:27:04.472 },{ 00:27:04.472 "params": { 00:27:04.472 "name": "Nvme9", 00:27:04.472 "trtype": "tcp", 00:27:04.472 "traddr": "10.0.0.2", 00:27:04.472 "adrfam": "ipv4", 00:27:04.472 "trsvcid": "4420", 00:27:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:04.472 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:04.472 "hdgst": false, 00:27:04.472 "ddgst": false 00:27:04.472 }, 00:27:04.472 "method": "bdev_nvme_attach_controller" 00:27:04.472 },{ 00:27:04.472 "params": { 00:27:04.472 "name": "Nvme10", 00:27:04.472 "trtype": "tcp", 00:27:04.472 "traddr": "10.0.0.2", 00:27:04.472 "adrfam": "ipv4", 00:27:04.472 "trsvcid": "4420", 00:27:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:04.472 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:04.472 "hdgst": false, 00:27:04.472 "ddgst": false 00:27:04.472 }, 00:27:04.472 "method": "bdev_nvme_attach_controller" 00:27:04.472 }' 00:27:04.472 [2024-04-26 23:29:53.636804] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.472 [2024-04-26 23:29:53.665690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.854 Running I/O for 1 seconds... 
00:27:07.237 00:27:07.237 Latency(us) 00:27:07.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.237 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.237 Verification LBA range: start 0x0 length 0x400 00:27:07.237 Nvme1n1 : 1.06 242.63 15.16 0.00 0.00 260960.21 18786.99 241172.48 00:27:07.237 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.237 Verification LBA range: start 0x0 length 0x400 00:27:07.237 Nvme2n1 : 1.09 234.68 14.67 0.00 0.00 265062.61 22609.92 249910.61 00:27:07.237 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.237 Verification LBA range: start 0x0 length 0x400 00:27:07.237 Nvme3n1 : 1.09 235.47 14.72 0.00 0.00 259262.29 21080.75 255153.49 00:27:07.237 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.237 Verification LBA range: start 0x0 length 0x400 00:27:07.237 Nvme4n1 : 1.14 225.30 14.08 0.00 0.00 265413.12 18459.31 290106.03 00:27:07.237 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.237 Verification LBA range: start 0x0 length 0x400 00:27:07.237 Nvme5n1 : 1.19 268.88 16.81 0.00 0.00 219978.58 21299.20 225443.84 00:27:07.237 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.237 Verification LBA range: start 0x0 length 0x400 00:27:07.237 Nvme6n1 : 1.18 216.23 13.51 0.00 0.00 268696.32 15728.64 255153.49 00:27:07.237 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.237 Verification LBA range: start 0x0 length 0x400 00:27:07.237 Nvme7n1 : 1.19 268.44 16.78 0.00 0.00 212694.02 19660.80 227191.47 00:27:07.237 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.237 Verification LBA range: start 0x0 length 0x400 00:27:07.237 Nvme8n1 : 1.18 216.05 13.50 0.00 0.00 259074.56 15947.09 274377.39 00:27:07.237 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.237 Verification LBA range: start 0x0 length 0x400 00:27:07.237 Nvme9n1 : 1.20 267.17 16.70 0.00 0.00 206194.35 21736.11 246415.36 00:27:07.237 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.237 Verification LBA range: start 0x0 length 0x400 00:27:07.237 Nvme10n1 : 1.20 266.46 16.65 0.00 0.00 203091.11 12834.13 249910.61 00:27:07.237 =================================================================================================================== 00:27:07.237 Total : 2441.31 152.58 0.00 0.00 239174.24 12834.13 290106.03 00:27:07.237 23:29:56 -- target/shutdown.sh@94 -- # stoptarget 00:27:07.237 23:29:56 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:07.237 23:29:56 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:07.237 23:29:56 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:07.237 23:29:56 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:07.237 23:29:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:07.237 23:29:56 -- nvmf/common.sh@117 -- # sync 00:27:07.237 23:29:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:07.237 23:29:56 -- nvmf/common.sh@120 -- # set +e 00:27:07.237 23:29:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:07.237 23:29:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:07.237 rmmod nvme_tcp 00:27:07.237 rmmod nvme_fabrics 00:27:07.237 rmmod 
nvme_keyring 00:27:07.237 23:29:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:07.237 23:29:56 -- nvmf/common.sh@124 -- # set -e 00:27:07.237 23:29:56 -- nvmf/common.sh@125 -- # return 0 00:27:07.237 23:29:56 -- nvmf/common.sh@478 -- # '[' -n 4066365 ']' 00:27:07.237 23:29:56 -- nvmf/common.sh@479 -- # killprocess 4066365 00:27:07.237 23:29:56 -- common/autotest_common.sh@936 -- # '[' -z 4066365 ']' 00:27:07.237 23:29:56 -- common/autotest_common.sh@940 -- # kill -0 4066365 00:27:07.237 23:29:56 -- common/autotest_common.sh@941 -- # uname 00:27:07.237 23:29:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:07.237 23:29:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4066365 00:27:07.237 23:29:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:07.237 23:29:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:07.237 23:29:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4066365' 00:27:07.237 killing process with pid 4066365 00:27:07.237 23:29:56 -- common/autotest_common.sh@955 -- # kill 4066365 00:27:07.237 23:29:56 -- common/autotest_common.sh@960 -- # wait 4066365 00:27:07.498 23:29:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:07.498 23:29:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:07.498 23:29:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:07.498 23:29:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.498 23:29:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:07.498 23:29:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.498 23:29:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.498 23:29:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.040 23:29:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.040 00:27:10.040 real 0m16.169s 00:27:10.040 user 0m32.668s 00:27:10.040 sys 0m6.408s 00:27:10.040 23:29:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:10.040 23:29:58 -- common/autotest_common.sh@10 -- # set +x 00:27:10.040 ************************************ 00:27:10.040 END TEST nvmf_shutdown_tc1 00:27:10.040 ************************************ 00:27:10.040 23:29:58 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:10.040 23:29:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:10.040 23:29:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:10.040 23:29:58 -- common/autotest_common.sh@10 -- # set +x 00:27:10.040 ************************************ 00:27:10.040 START TEST nvmf_shutdown_tc2 00:27:10.040 ************************************ 00:27:10.040 23:29:58 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:27:10.040 23:29:58 -- target/shutdown.sh@99 -- # starttarget 00:27:10.040 23:29:58 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:10.040 23:29:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:10.040 23:29:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.040 23:29:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:10.041 23:29:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:10.041 23:29:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:10.041 23:29:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.041 23:29:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.041 23:29:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.041 
23:29:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:10.041 23:29:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.041 23:29:59 -- common/autotest_common.sh@10 -- # set +x 00:27:10.041 23:29:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:10.041 23:29:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.041 23:29:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.041 23:29:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:10.041 23:29:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.041 23:29:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.041 23:29:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.041 23:29:59 -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.041 23:29:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:10.041 23:29:59 -- nvmf/common.sh@296 -- # e810=() 00:27:10.041 23:29:59 -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.041 23:29:59 -- nvmf/common.sh@297 -- # x722=() 00:27:10.041 23:29:59 -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.041 23:29:59 -- nvmf/common.sh@298 -- # mlx=() 00:27:10.041 23:29:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.041 23:29:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.041 23:29:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:10.041 23:29:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:10.041 23:29:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.041 23:29:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.041 23:29:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:10.041 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:10.041 23:29:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.041 23:29:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:10.041 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:10.041 23:29:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.041 
23:29:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.041 23:29:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.041 23:29:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.041 23:29:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:10.041 23:29:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.041 23:29:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:10.041 Found net devices under 0000:31:00.0: cvl_0_0 00:27:10.041 23:29:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.041 23:29:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.041 23:29:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.041 23:29:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:10.041 23:29:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.041 23:29:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:10.041 Found net devices under 0000:31:00.1: cvl_0_1 00:27:10.041 23:29:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.041 23:29:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:10.041 23:29:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:10.041 23:29:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:10.041 23:29:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:10.041 23:29:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.041 23:29:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.041 23:29:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.041 23:29:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:10.041 23:29:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.041 23:29:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.041 23:29:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:10.041 23:29:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.041 23:29:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.041 23:29:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:10.041 23:29:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:10.041 23:29:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.041 23:29:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.041 23:29:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.041 23:29:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.041 23:29:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:10.041 23:29:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.302 23:29:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.302 23:29:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:27:10.302 23:29:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:10.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:27:10.302 00:27:10.302 --- 10.0.0.2 ping statistics --- 00:27:10.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.302 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:27:10.302 23:29:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:27:10.302 00:27:10.302 --- 10.0.0.1 ping statistics --- 00:27:10.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.302 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:27:10.302 23:29:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.302 23:29:59 -- nvmf/common.sh@411 -- # return 0 00:27:10.302 23:29:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:10.302 23:29:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.302 23:29:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:10.302 23:29:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:10.302 23:29:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.302 23:29:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:10.302 23:29:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:10.302 23:29:59 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:10.302 23:29:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:10.302 23:29:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:10.302 23:29:59 -- common/autotest_common.sh@10 -- # set +x 00:27:10.302 23:29:59 -- nvmf/common.sh@470 -- # nvmfpid=4068401 00:27:10.302 23:29:59 -- nvmf/common.sh@471 -- # waitforlisten 4068401 00:27:10.302 23:29:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:10.302 23:29:59 -- common/autotest_common.sh@817 -- # '[' -z 4068401 ']' 00:27:10.302 23:29:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.302 23:29:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:10.302 23:29:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.302 23:29:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:10.302 23:29:59 -- common/autotest_common.sh@10 -- # set +x 00:27:10.302 [2024-04-26 23:29:59.454986] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:10.302 [2024-04-26 23:29:59.455031] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.302 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.302 [2024-04-26 23:29:59.521911] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.302 [2024-04-26 23:29:59.551161] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
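The nvmf_tcp_init sequence traced just above is the two-endpoint topology every tc rides on: the first NIC port (cvl_0_0) is moved into a private network namespace and addressed as the 10.0.0.2 target, its sibling port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, and the two pings verify the path in both directions. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator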
00:27:10.302 [2024-04-26 23:29:59.551201] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.302 [2024-04-26 23:29:59.551210] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.302 [2024-04-26 23:29:59.551218] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.302 [2024-04-26 23:29:59.551225] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.302 [2024-04-26 23:29:59.551352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.302 [2024-04-26 23:29:59.551507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.302 [2024-04-26 23:29:59.551661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.302 [2024-04-26 23:29:59.551662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:11.261 23:30:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:11.261 23:30:00 -- common/autotest_common.sh@850 -- # return 0 00:27:11.261 23:30:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:11.261 23:30:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:11.261 23:30:00 -- common/autotest_common.sh@10 -- # set +x 00:27:11.261 23:30:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.261 23:30:00 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:11.261 23:30:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.261 23:30:00 -- common/autotest_common.sh@10 -- # set +x 00:27:11.261 [2024-04-26 23:30:00.273607] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.261 23:30:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.261 23:30:00 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:11.261 23:30:00 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:11.261 23:30:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:11.261 23:30:00 -- common/autotest_common.sh@10 -- # set +x 00:27:11.261 23:30:00 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:11.261 23:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.261 23:30:00 -- target/shutdown.sh@28 -- # cat 00:27:11.261 23:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.261 23:30:00 -- target/shutdown.sh@28 -- # cat 00:27:11.261 23:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.261 23:30:00 -- target/shutdown.sh@28 -- # cat 00:27:11.261 23:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.261 23:30:00 -- target/shutdown.sh@28 -- # cat 00:27:11.261 23:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.261 23:30:00 -- target/shutdown.sh@28 -- # cat 00:27:11.261 23:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.261 23:30:00 -- target/shutdown.sh@28 -- # cat 00:27:11.261 23:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.261 23:30:00 -- target/shutdown.sh@28 -- # cat 00:27:11.261 23:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.261 23:30:00 -- target/shutdown.sh@28 -- # cat 00:27:11.261 23:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.261 23:30:00 -- 
target/shutdown.sh@28 -- # cat 00:27:11.261 23:30:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.261 23:30:00 -- target/shutdown.sh@28 -- # cat 00:27:11.261 23:30:00 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:11.261 23:30:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.261 23:30:00 -- common/autotest_common.sh@10 -- # set +x 00:27:11.261 Malloc1 00:27:11.261 [2024-04-26 23:30:00.373914] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.261 Malloc2 00:27:11.261 Malloc3 00:27:11.261 Malloc4 00:27:11.261 Malloc5 00:27:11.520 Malloc6 00:27:11.520 Malloc7 00:27:11.520 Malloc8 00:27:11.520 Malloc9 00:27:11.520 Malloc10 00:27:11.520 23:30:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.520 23:30:00 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:11.520 23:30:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:11.520 23:30:00 -- common/autotest_common.sh@10 -- # set +x 00:27:11.781 23:30:00 -- target/shutdown.sh@103 -- # perfpid=4068823 00:27:11.781 23:30:00 -- target/shutdown.sh@104 -- # waitforlisten 4068823 /var/tmp/bdevperf.sock 00:27:11.781 23:30:00 -- common/autotest_common.sh@817 -- # '[' -z 4068823 ']' 00:27:11.781 23:30:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:11.781 23:30:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:11.781 23:30:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:11.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:11.781 23:30:00 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:11.781 23:30:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:11.781 23:30:00 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:11.781 23:30:00 -- common/autotest_common.sh@10 -- # set +x 00:27:11.781 23:30:00 -- nvmf/common.sh@521 -- # config=() 00:27:11.781 23:30:00 -- nvmf/common.sh@521 -- # local subsystem config 00:27:11.781 23:30:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:11.781 { 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme$subsystem", 00:27:11.781 "trtype": "$TEST_TRANSPORT", 00:27:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "$NVMF_PORT", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.781 "hdgst": ${hdgst:-false}, 00:27:11.781 "ddgst": ${ddgst:-false} 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 } 00:27:11.781 EOF 00:27:11.781 )") 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # cat 00:27:11.781 23:30:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:11.781 { 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme$subsystem", 00:27:11.781 "trtype": "$TEST_TRANSPORT", 00:27:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "$NVMF_PORT", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.781 
"hdgst": ${hdgst:-false}, 00:27:11.781 "ddgst": ${ddgst:-false} 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 } 00:27:11.781 EOF 00:27:11.781 )") 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # cat 00:27:11.781 23:30:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:11.781 { 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme$subsystem", 00:27:11.781 "trtype": "$TEST_TRANSPORT", 00:27:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "$NVMF_PORT", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.781 "hdgst": ${hdgst:-false}, 00:27:11.781 "ddgst": ${ddgst:-false} 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 } 00:27:11.781 EOF 00:27:11.781 )") 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # cat 00:27:11.781 23:30:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:11.781 { 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme$subsystem", 00:27:11.781 "trtype": "$TEST_TRANSPORT", 00:27:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "$NVMF_PORT", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.781 "hdgst": ${hdgst:-false}, 00:27:11.781 "ddgst": ${ddgst:-false} 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 } 00:27:11.781 EOF 00:27:11.781 )") 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # cat 00:27:11.781 23:30:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:11.781 { 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme$subsystem", 00:27:11.781 "trtype": "$TEST_TRANSPORT", 00:27:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "$NVMF_PORT", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.781 "hdgst": ${hdgst:-false}, 00:27:11.781 "ddgst": ${ddgst:-false} 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 } 00:27:11.781 EOF 00:27:11.781 )") 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # cat 00:27:11.781 23:30:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:11.781 { 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme$subsystem", 00:27:11.781 "trtype": "$TEST_TRANSPORT", 00:27:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "$NVMF_PORT", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.781 "hdgst": ${hdgst:-false}, 00:27:11.781 "ddgst": ${ddgst:-false} 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 } 00:27:11.781 EOF 00:27:11.781 )") 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # cat 00:27:11.781 [2024-04-26 23:30:00.827778] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:27:11.781 [2024-04-26 23:30:00.827828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068823 ] 00:27:11.781 23:30:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:11.781 { 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme$subsystem", 00:27:11.781 "trtype": "$TEST_TRANSPORT", 00:27:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "$NVMF_PORT", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.781 "hdgst": ${hdgst:-false}, 00:27:11.781 "ddgst": ${ddgst:-false} 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 } 00:27:11.781 EOF 00:27:11.781 )") 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # cat 00:27:11.781 23:30:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:11.781 { 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme$subsystem", 00:27:11.781 "trtype": "$TEST_TRANSPORT", 00:27:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "$NVMF_PORT", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.781 "hdgst": ${hdgst:-false}, 00:27:11.781 "ddgst": ${ddgst:-false} 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 } 00:27:11.781 EOF 00:27:11.781 )") 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # cat 00:27:11.781 23:30:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:11.781 { 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme$subsystem", 00:27:11.781 "trtype": "$TEST_TRANSPORT", 00:27:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "$NVMF_PORT", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.781 "hdgst": ${hdgst:-false}, 00:27:11.781 "ddgst": ${ddgst:-false} 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 } 00:27:11.781 EOF 00:27:11.781 )") 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # cat 00:27:11.781 23:30:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:11.781 { 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme$subsystem", 00:27:11.781 "trtype": "$TEST_TRANSPORT", 00:27:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "$NVMF_PORT", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.781 "hdgst": ${hdgst:-false}, 00:27:11.781 "ddgst": ${ddgst:-false} 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 } 00:27:11.781 EOF 00:27:11.781 )") 00:27:11.781 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.781 23:30:00 -- nvmf/common.sh@543 -- # cat 00:27:11.781 23:30:00 -- nvmf/common.sh@545 -- # jq . 
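Each nvmf/common.sh@543 hit above appends one heredoc stanza to the config array; once all ten are queued, IFS=, joins them and the result is pretty-printed (the jq . step) into the blob shown next. A self-contained sketch of that pattern under a hypothetical helper name (the real gen_nvmf_target_json in the suite's nvmf/common.sh also emits a surrounding wrapper object):

gen_attach_stanzas() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"    # stanzas joined by commas, same shape as the blob below
}
gen_attach_stanzas 1 2 3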
00:27:11.781 23:30:00 -- nvmf/common.sh@546 -- # IFS=, 00:27:11.781 23:30:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme1", 00:27:11.781 "trtype": "tcp", 00:27:11.781 "traddr": "10.0.0.2", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "4420", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:11.781 "hdgst": false, 00:27:11.781 "ddgst": false 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 },{ 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme2", 00:27:11.781 "trtype": "tcp", 00:27:11.781 "traddr": "10.0.0.2", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "4420", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:11.781 "hdgst": false, 00:27:11.781 "ddgst": false 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 },{ 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme3", 00:27:11.781 "trtype": "tcp", 00:27:11.781 "traddr": "10.0.0.2", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "4420", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:11.781 "hdgst": false, 00:27:11.781 "ddgst": false 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 },{ 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme4", 00:27:11.781 "trtype": "tcp", 00:27:11.781 "traddr": "10.0.0.2", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "4420", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:11.781 "hdgst": false, 00:27:11.781 "ddgst": false 00:27:11.781 }, 00:27:11.781 "method": "bdev_nvme_attach_controller" 00:27:11.781 },{ 00:27:11.781 "params": { 00:27:11.781 "name": "Nvme5", 00:27:11.781 "trtype": "tcp", 00:27:11.781 "traddr": "10.0.0.2", 00:27:11.781 "adrfam": "ipv4", 00:27:11.781 "trsvcid": "4420", 00:27:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:11.781 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:11.781 "hdgst": false, 00:27:11.782 "ddgst": false 00:27:11.782 }, 00:27:11.782 "method": "bdev_nvme_attach_controller" 00:27:11.782 },{ 00:27:11.782 "params": { 00:27:11.782 "name": "Nvme6", 00:27:11.782 "trtype": "tcp", 00:27:11.782 "traddr": "10.0.0.2", 00:27:11.782 "adrfam": "ipv4", 00:27:11.782 "trsvcid": "4420", 00:27:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:11.782 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:11.782 "hdgst": false, 00:27:11.782 "ddgst": false 00:27:11.782 }, 00:27:11.782 "method": "bdev_nvme_attach_controller" 00:27:11.782 },{ 00:27:11.782 "params": { 00:27:11.782 "name": "Nvme7", 00:27:11.782 "trtype": "tcp", 00:27:11.782 "traddr": "10.0.0.2", 00:27:11.782 "adrfam": "ipv4", 00:27:11.782 "trsvcid": "4420", 00:27:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:11.782 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:11.782 "hdgst": false, 00:27:11.782 "ddgst": false 00:27:11.782 }, 00:27:11.782 "method": "bdev_nvme_attach_controller" 00:27:11.782 },{ 00:27:11.782 "params": { 00:27:11.782 "name": "Nvme8", 00:27:11.782 "trtype": "tcp", 00:27:11.782 "traddr": "10.0.0.2", 00:27:11.782 "adrfam": "ipv4", 00:27:11.782 "trsvcid": "4420", 00:27:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:11.782 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:11.782 "hdgst": false, 00:27:11.782 "ddgst": false 00:27:11.782 }, 00:27:11.782 "method": 
"bdev_nvme_attach_controller" 00:27:11.782 },{ 00:27:11.782 "params": { 00:27:11.782 "name": "Nvme9", 00:27:11.782 "trtype": "tcp", 00:27:11.782 "traddr": "10.0.0.2", 00:27:11.782 "adrfam": "ipv4", 00:27:11.782 "trsvcid": "4420", 00:27:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:11.782 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:11.782 "hdgst": false, 00:27:11.782 "ddgst": false 00:27:11.782 }, 00:27:11.782 "method": "bdev_nvme_attach_controller" 00:27:11.782 },{ 00:27:11.782 "params": { 00:27:11.782 "name": "Nvme10", 00:27:11.782 "trtype": "tcp", 00:27:11.782 "traddr": "10.0.0.2", 00:27:11.782 "adrfam": "ipv4", 00:27:11.782 "trsvcid": "4420", 00:27:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:11.782 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:11.782 "hdgst": false, 00:27:11.782 "ddgst": false 00:27:11.782 }, 00:27:11.782 "method": "bdev_nvme_attach_controller" 00:27:11.782 }' 00:27:11.782 [2024-04-26 23:30:00.889175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.782 [2024-04-26 23:30:00.918336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.166 Running I/O for 10 seconds... 00:27:13.166 23:30:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:13.166 23:30:02 -- common/autotest_common.sh@850 -- # return 0 00:27:13.166 23:30:02 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:13.166 23:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.166 23:30:02 -- common/autotest_common.sh@10 -- # set +x 00:27:13.426 23:30:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.426 23:30:02 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:13.426 23:30:02 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:13.426 23:30:02 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:13.426 23:30:02 -- target/shutdown.sh@57 -- # local ret=1 00:27:13.426 23:30:02 -- target/shutdown.sh@58 -- # local i 00:27:13.426 23:30:02 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:13.426 23:30:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:13.426 23:30:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:13.426 23:30:02 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:13.426 23:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.426 23:30:02 -- common/autotest_common.sh@10 -- # set +x 00:27:13.426 23:30:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.426 23:30:02 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:13.426 23:30:02 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:13.426 23:30:02 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:13.687 23:30:02 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:13.687 23:30:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:13.687 23:30:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:13.687 23:30:02 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:13.687 23:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.687 23:30:02 -- common/autotest_common.sh@10 -- # set +x 00:27:13.687 23:30:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.687 23:30:02 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:13.687 23:30:02 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:13.687 23:30:02 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:13.948 23:30:03 -- target/shutdown.sh@59 -- # (( i-- )) 
00:27:13.948 23:30:03 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:13.948 23:30:03 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:13.948 23:30:03 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:13.948 23:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.948 23:30:03 -- common/autotest_common.sh@10 -- # set +x 00:27:13.948 23:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.948 23:30:03 -- target/shutdown.sh@60 -- # read_io_count=195 00:27:13.948 23:30:03 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:27:13.948 23:30:03 -- target/shutdown.sh@64 -- # ret=0 00:27:13.948 23:30:03 -- target/shutdown.sh@65 -- # break 00:27:13.948 23:30:03 -- target/shutdown.sh@69 -- # return 0 00:27:13.948 23:30:03 -- target/shutdown.sh@110 -- # killprocess 4068823 00:27:13.948 23:30:03 -- common/autotest_common.sh@936 -- # '[' -z 4068823 ']' 00:27:13.948 23:30:03 -- common/autotest_common.sh@940 -- # kill -0 4068823 00:27:13.948 23:30:03 -- common/autotest_common.sh@941 -- # uname 00:27:13.948 23:30:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:13.948 23:30:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4068823 00:27:13.948 23:30:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:13.948 23:30:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:13.948 23:30:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4068823' 00:27:13.948 killing process with pid 4068823 00:27:13.948 23:30:03 -- common/autotest_common.sh@955 -- # kill 4068823 00:27:13.948 23:30:03 -- common/autotest_common.sh@960 -- # wait 4068823 00:27:14.209 Received shutdown signal, test time was about 0.972436 seconds 00:27:14.209 00:27:14.209 Latency(us) 00:27:14.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.210 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.210 Verification LBA range: start 0x0 length 0x400 00:27:14.210 Nvme1n1 : 0.96 267.66 16.73 0.00 0.00 236272.21 18896.21 244667.73 00:27:14.210 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.210 Verification LBA range: start 0x0 length 0x400 00:27:14.210 Nvme2n1 : 0.97 264.74 16.55 0.00 0.00 234054.83 16493.23 246415.36 00:27:14.210 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.210 Verification LBA range: start 0x0 length 0x400 00:27:14.210 Nvme3n1 : 0.97 263.50 16.47 0.00 0.00 230366.29 29928.11 249910.61 00:27:14.210 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.210 Verification LBA range: start 0x0 length 0x400 00:27:14.210 Nvme4n1 : 0.96 267.38 16.71 0.00 0.00 221486.29 21845.33 241172.48 00:27:14.210 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.210 Verification LBA range: start 0x0 length 0x400 00:27:14.210 Nvme5n1 : 0.94 203.54 12.72 0.00 0.00 284896.14 18896.21 251658.24 00:27:14.210 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.210 Verification LBA range: start 0x0 length 0x400 00:27:14.210 Nvme6n1 : 0.95 202.74 12.67 0.00 0.00 279688.25 27852.80 248162.99 00:27:14.210 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.210 Verification LBA range: start 0x0 length 0x400 00:27:14.210 Nvme7n1 : 0.96 265.67 16.60 0.00 0.00 208998.19 18459.31 218453.33 00:27:14.210 Job: Nvme8n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:27:14.210 Verification LBA range: start 0x0 length 0x400 00:27:14.210 Nvme8n1 : 0.95 269.19 16.82 0.00 0.00 201090.56 31238.83 223696.21 00:27:14.210 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.210 Verification LBA range: start 0x0 length 0x400 00:27:14.210 Nvme9n1 : 0.93 205.94 12.87 0.00 0.00 255532.66 17148.59 249910.61 00:27:14.210 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.210 Verification LBA range: start 0x0 length 0x400 00:27:14.210 Nvme10n1 : 0.96 204.24 12.77 0.00 0.00 251238.33 5789.01 272629.76 00:27:14.210 =================================================================================================================== 00:27:14.210 Total : 2414.61 150.91 0.00 0.00 237333.57 5789.01 272629.76 00:27:14.210 23:30:03 -- target/shutdown.sh@113 -- # sleep 1 00:27:15.152 23:30:04 -- target/shutdown.sh@114 -- # kill -0 4068401 00:27:15.152 23:30:04 -- target/shutdown.sh@116 -- # stoptarget 00:27:15.152 23:30:04 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:15.152 23:30:04 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:15.152 23:30:04 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:15.152 23:30:04 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:15.152 23:30:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:15.152 23:30:04 -- nvmf/common.sh@117 -- # sync 00:27:15.152 23:30:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:15.152 23:30:04 -- nvmf/common.sh@120 -- # set +e 00:27:15.152 23:30:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:15.152 23:30:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:15.152 rmmod nvme_tcp 00:27:15.152 rmmod nvme_fabrics 00:27:15.412 rmmod nvme_keyring 00:27:15.412 23:30:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:15.412 23:30:04 -- nvmf/common.sh@124 -- # set -e 00:27:15.412 23:30:04 -- nvmf/common.sh@125 -- # return 0 00:27:15.412 23:30:04 -- nvmf/common.sh@478 -- # '[' -n 4068401 ']' 00:27:15.412 23:30:04 -- nvmf/common.sh@479 -- # killprocess 4068401 00:27:15.412 23:30:04 -- common/autotest_common.sh@936 -- # '[' -z 4068401 ']' 00:27:15.412 23:30:04 -- common/autotest_common.sh@940 -- # kill -0 4068401 00:27:15.412 23:30:04 -- common/autotest_common.sh@941 -- # uname 00:27:15.412 23:30:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:15.412 23:30:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4068401 00:27:15.412 23:30:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:15.412 23:30:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:15.412 23:30:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4068401' 00:27:15.413 killing process with pid 4068401 00:27:15.413 23:30:04 -- common/autotest_common.sh@955 -- # kill 4068401 00:27:15.413 23:30:04 -- common/autotest_common.sh@960 -- # wait 4068401 00:27:15.673 23:30:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:15.673 23:30:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:15.673 23:30:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:15.673 23:30:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.673 23:30:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:15.673 23:30:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:27:15.673 23:30:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.673 23:30:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.583 23:30:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:17.583 00:27:17.583 real 0m7.809s 00:27:17.583 user 0m23.344s 00:27:17.583 sys 0m1.261s 00:27:17.583 23:30:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:17.583 23:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.584 ************************************ 00:27:17.584 END TEST nvmf_shutdown_tc2 00:27:17.584 ************************************ 00:27:17.844 23:30:06 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:17.844 23:30:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:17.844 23:30:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:17.844 23:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.844 ************************************ 00:27:17.844 START TEST nvmf_shutdown_tc3 00:27:17.844 ************************************ 00:27:17.844 23:30:07 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:27:17.844 23:30:07 -- target/shutdown.sh@121 -- # starttarget 00:27:17.844 23:30:07 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:17.844 23:30:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:17.844 23:30:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.844 23:30:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:17.844 23:30:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:17.844 23:30:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:17.844 23:30:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.844 23:30:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.844 23:30:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.844 23:30:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:17.844 23:30:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:17.844 23:30:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.844 23:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:17.844 23:30:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:17.844 23:30:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.845 23:30:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.845 23:30:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.845 23:30:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.845 23:30:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.845 23:30:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.845 23:30:07 -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.845 23:30:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.845 23:30:07 -- nvmf/common.sh@296 -- # e810=() 00:27:17.845 23:30:07 -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.845 23:30:07 -- nvmf/common.sh@297 -- # x722=() 00:27:17.845 23:30:07 -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.845 23:30:07 -- nvmf/common.sh@298 -- # mlx=() 00:27:17.845 23:30:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.845 23:30:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.845 23:30:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.845 23:30:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.845 23:30:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.845 23:30:07 
-- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.845 23:30:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.845 23:30:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.845 23:30:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.845 23:30:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.845 23:30:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.845 23:30:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.845 23:30:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.845 23:30:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.845 23:30:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.845 23:30:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.845 23:30:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:17.845 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:17.845 23:30:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.845 23:30:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:17.845 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:17.845 23:30:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.845 23:30:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.845 23:30:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.845 23:30:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:17.845 23:30:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.845 23:30:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:17.845 Found net devices under 0000:31:00.0: cvl_0_0 00:27:17.845 23:30:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.845 23:30:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.845 23:30:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.845 23:30:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:17.845 23:30:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.845 23:30:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:17.845 Found net devices under 0000:31:00.1: cvl_0_1 00:27:17.845 23:30:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.845 
23:30:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:17.845 23:30:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:17.845 23:30:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:17.845 23:30:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:17.845 23:30:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.845 23:30:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.845 23:30:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.845 23:30:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.845 23:30:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.845 23:30:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.845 23:30:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.845 23:30:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.845 23:30:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.845 23:30:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.845 23:30:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.845 23:30:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.845 23:30:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.106 23:30:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.106 23:30:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.106 23:30:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:18.106 23:30:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.106 23:30:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.106 23:30:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.106 23:30:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:18.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:27:18.106 00:27:18.106 --- 10.0.0.2 ping statistics --- 00:27:18.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.106 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:27:18.106 23:30:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:18.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:27:18.106 00:27:18.106 --- 10.0.0.1 ping statistics --- 00:27:18.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.106 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:27:18.366 23:30:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.366 23:30:07 -- nvmf/common.sh@411 -- # return 0 00:27:18.366 23:30:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:18.366 23:30:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.366 23:30:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:18.366 23:30:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:18.366 23:30:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.366 23:30:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:18.366 23:30:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:18.366 23:30:07 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:18.366 23:30:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:18.366 23:30:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:18.366 23:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.366 23:30:07 -- nvmf/common.sh@470 -- # nvmfpid=4070207 00:27:18.366 23:30:07 -- nvmf/common.sh@471 -- # waitforlisten 4070207 00:27:18.366 23:30:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:18.366 23:30:07 -- common/autotest_common.sh@817 -- # '[' -z 4070207 ']' 00:27:18.366 23:30:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.366 23:30:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:18.366 23:30:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.366 23:30:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:18.366 23:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.366 [2024-04-26 23:30:07.480922] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:18.366 [2024-04-26 23:30:07.481005] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.366 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.366 [2024-04-26 23:30:07.553669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.366 [2024-04-26 23:30:07.591231] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.366 [2024-04-26 23:30:07.591302] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.366 [2024-04-26 23:30:07.591310] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.366 [2024-04-26 23:30:07.591317] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.366 [2024-04-26 23:30:07.591323] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
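Those startup notices come from the tc3 target, launched inside the target namespace (the trace repeats the ip netns exec prefix; condensed to one here): -i 0 picks shared-memory instance 0 (hence the 'spdk_trace -s nvmf -i 0' hint that follows), -e 0xFFFF enables every tracepoint group, and -m 0x1E is a hex core mask selecting cores 1-4, which is why exactly four reactors report in below. A sketch of the launch:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # suite helper: blocks until /var/tmp/spdk.sock is listening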
00:27:18.366 [2024-04-26 23:30:07.591612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.366 [2024-04-26 23:30:07.591741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.366 [2024-04-26 23:30:07.592092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:18.366 [2024-04-26 23:30:07.592190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.310 23:30:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:19.310 23:30:08 -- common/autotest_common.sh@850 -- # return 0 00:27:19.310 23:30:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:19.310 23:30:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:19.310 23:30:08 -- common/autotest_common.sh@10 -- # set +x 00:27:19.310 23:30:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.310 23:30:08 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:19.310 23:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.310 23:30:08 -- common/autotest_common.sh@10 -- # set +x 00:27:19.310 [2024-04-26 23:30:08.286450] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.310 23:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.310 23:30:08 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:19.310 23:30:08 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:19.310 23:30:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:19.310 23:30:08 -- common/autotest_common.sh@10 -- # set +x 00:27:19.310 23:30:08 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:19.310 23:30:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:19.310 23:30:08 -- target/shutdown.sh@28 -- # cat 00:27:19.310 23:30:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:19.310 23:30:08 -- target/shutdown.sh@28 -- # cat 00:27:19.310 23:30:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:19.310 23:30:08 -- target/shutdown.sh@28 -- # cat 00:27:19.310 23:30:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:19.310 23:30:08 -- target/shutdown.sh@28 -- # cat 00:27:19.310 23:30:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:19.310 23:30:08 -- target/shutdown.sh@28 -- # cat 00:27:19.310 23:30:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:19.310 23:30:08 -- target/shutdown.sh@28 -- # cat 00:27:19.310 23:30:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:19.310 23:30:08 -- target/shutdown.sh@28 -- # cat 00:27:19.310 23:30:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:19.310 23:30:08 -- target/shutdown.sh@28 -- # cat 00:27:19.310 23:30:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:19.310 23:30:08 -- target/shutdown.sh@28 -- # cat 00:27:19.310 23:30:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:19.310 23:30:08 -- target/shutdown.sh@28 -- # cat 00:27:19.310 23:30:08 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:19.310 23:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.310 23:30:08 -- common/autotest_common.sh@10 -- # set +x 00:27:19.310 Malloc1 00:27:19.310 [2024-04-26 23:30:08.386726] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.310 Malloc2 
00:27:19.310 Malloc3 00:27:19.310 Malloc4 00:27:19.310 Malloc5 00:27:19.310 Malloc6 00:27:19.571 Malloc7 00:27:19.571 Malloc8 00:27:19.571 Malloc9 00:27:19.571 Malloc10 00:27:19.571 23:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.571 23:30:08 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:19.571 23:30:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:19.571 23:30:08 -- common/autotest_common.sh@10 -- # set +x 00:27:19.571 23:30:08 -- target/shutdown.sh@125 -- # perfpid=4070754 00:27:19.571 23:30:08 -- target/shutdown.sh@126 -- # waitforlisten 4070754 /var/tmp/bdevperf.sock 00:27:19.571 23:30:08 -- common/autotest_common.sh@817 -- # '[' -z 4070754 ']' 00:27:19.571 23:30:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:19.571 23:30:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:19.571 23:30:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:19.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:19.571 23:30:08 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:19.571 23:30:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:19.571 23:30:08 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:19.571 23:30:08 -- common/autotest_common.sh@10 -- # set +x 00:27:19.571 23:30:08 -- nvmf/common.sh@521 -- # config=() 00:27:19.571 23:30:08 -- nvmf/common.sh@521 -- # local subsystem config 00:27:19.571 23:30:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:19.571 23:30:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:19.571 { 00:27:19.571 "params": { 00:27:19.571 "name": "Nvme$subsystem", 00:27:19.571 "trtype": "$TEST_TRANSPORT", 00:27:19.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.571 "adrfam": "ipv4", 00:27:19.571 "trsvcid": "$NVMF_PORT", 00:27:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.571 "hdgst": ${hdgst:-false}, 00:27:19.571 "ddgst": ${ddgst:-false} 00:27:19.571 }, 00:27:19.571 "method": "bdev_nvme_attach_controller" 00:27:19.571 } 00:27:19.571 EOF 00:27:19.571 )") 00:27:19.571 23:30:08 -- nvmf/common.sh@543 -- # cat 00:27:19.571 23:30:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:19.571 23:30:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:19.571 { 00:27:19.571 "params": { 00:27:19.571 "name": "Nvme$subsystem", 00:27:19.571 "trtype": "$TEST_TRANSPORT", 00:27:19.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.571 "adrfam": "ipv4", 00:27:19.571 "trsvcid": "$NVMF_PORT", 00:27:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.571 "hdgst": ${hdgst:-false}, 00:27:19.571 "ddgst": ${ddgst:-false} 00:27:19.571 }, 00:27:19.571 "method": "bdev_nvme_attach_controller" 00:27:19.571 } 00:27:19.571 EOF 00:27:19.571 )") 00:27:19.571 23:30:08 -- nvmf/common.sh@543 -- # cat 00:27:19.571 23:30:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:19.571 23:30:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:19.571 { 00:27:19.571 "params": { 00:27:19.571 "name": "Nvme$subsystem", 00:27:19.571 "trtype": "$TEST_TRANSPORT", 00:27:19.571 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:27:19.571 "adrfam": "ipv4", 00:27:19.571 "trsvcid": "$NVMF_PORT", 00:27:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.571 "hdgst": ${hdgst:-false}, 00:27:19.571 "ddgst": ${ddgst:-false} 00:27:19.571 }, 00:27:19.571 "method": "bdev_nvme_attach_controller" 00:27:19.571 } 00:27:19.571 EOF 00:27:19.571 )") 00:27:19.571 23:30:08 -- nvmf/common.sh@543 -- # cat 00:27:19.571 23:30:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:19.571 23:30:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:19.571 { 00:27:19.571 "params": { 00:27:19.571 "name": "Nvme$subsystem", 00:27:19.571 "trtype": "$TEST_TRANSPORT", 00:27:19.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.571 "adrfam": "ipv4", 00:27:19.571 "trsvcid": "$NVMF_PORT", 00:27:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.571 "hdgst": ${hdgst:-false}, 00:27:19.571 "ddgst": ${ddgst:-false} 00:27:19.571 }, 00:27:19.571 "method": "bdev_nvme_attach_controller" 00:27:19.571 } 00:27:19.571 EOF 00:27:19.571 )") 00:27:19.571 23:30:08 -- nvmf/common.sh@543 -- # cat 00:27:19.571 23:30:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:19.832 { 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme$subsystem", 00:27:19.832 "trtype": "$TEST_TRANSPORT", 00:27:19.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "$NVMF_PORT", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.832 "hdgst": ${hdgst:-false}, 00:27:19.832 "ddgst": ${ddgst:-false} 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 } 00:27:19.832 EOF 00:27:19.832 )") 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # cat 00:27:19.832 23:30:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:19.832 { 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme$subsystem", 00:27:19.832 "trtype": "$TEST_TRANSPORT", 00:27:19.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "$NVMF_PORT", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.832 "hdgst": ${hdgst:-false}, 00:27:19.832 "ddgst": ${ddgst:-false} 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 } 00:27:19.832 EOF 00:27:19.832 )") 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # cat 00:27:19.832 [2024-04-26 23:30:08.838649] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:27:19.832 [2024-04-26 23:30:08.838703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4070754 ] 00:27:19.832 23:30:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:19.832 { 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme$subsystem", 00:27:19.832 "trtype": "$TEST_TRANSPORT", 00:27:19.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "$NVMF_PORT", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.832 "hdgst": ${hdgst:-false}, 00:27:19.832 "ddgst": ${ddgst:-false} 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 } 00:27:19.832 EOF 00:27:19.832 )") 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # cat 00:27:19.832 23:30:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:19.832 { 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme$subsystem", 00:27:19.832 "trtype": "$TEST_TRANSPORT", 00:27:19.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "$NVMF_PORT", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.832 "hdgst": ${hdgst:-false}, 00:27:19.832 "ddgst": ${ddgst:-false} 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 } 00:27:19.832 EOF 00:27:19.832 )") 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # cat 00:27:19.832 23:30:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:19.832 { 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme$subsystem", 00:27:19.832 "trtype": "$TEST_TRANSPORT", 00:27:19.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "$NVMF_PORT", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.832 "hdgst": ${hdgst:-false}, 00:27:19.832 "ddgst": ${ddgst:-false} 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 } 00:27:19.832 EOF 00:27:19.832 )") 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # cat 00:27:19.832 23:30:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:19.832 { 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme$subsystem", 00:27:19.832 "trtype": "$TEST_TRANSPORT", 00:27:19.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "$NVMF_PORT", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.832 "hdgst": ${hdgst:-false}, 00:27:19.832 "ddgst": ${ddgst:-false} 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 } 00:27:19.832 EOF 00:27:19.832 )") 00:27:19.832 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.832 23:30:08 -- nvmf/common.sh@543 -- # cat 00:27:19.832 23:30:08 -- nvmf/common.sh@545 -- # jq . 
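Each nvmf/common.sh@543 "cat" traced above appends one heredoc-generated JSON fragment to the config array, and the jq pass at @545 validates the assembled document before bdevperf consumes it via --json /dev/fd/63. A condensed sketch of the same pattern, using printf in place of the heredocs for brevity; the variable names mirror the trace, while the surrounding JSON scaffolding is simplified relative to the real gen_nvmf_target_json.

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller stanza per subsystem, with shell-expanded target params.
        config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' \
            "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem" "$subsystem")")
    done
    local IFS=,                             # comma-join the fragments, as at @546
    printf '[%s]\n' "${config[*]}" | jq .   # validate the final document
}

In the real helper the joined fragments sit inside a larger subsystems document, which is why the trace at @547 prints the full comma-separated bdev_nvme_attach_controller list for Nvme1 through Nvme10.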
00:27:19.832 23:30:08 -- nvmf/common.sh@546 -- # IFS=, 00:27:19.832 23:30:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme1", 00:27:19.832 "trtype": "tcp", 00:27:19.832 "traddr": "10.0.0.2", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "4420", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:19.832 "hdgst": false, 00:27:19.832 "ddgst": false 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 },{ 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme2", 00:27:19.832 "trtype": "tcp", 00:27:19.832 "traddr": "10.0.0.2", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "4420", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:19.832 "hdgst": false, 00:27:19.832 "ddgst": false 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 },{ 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme3", 00:27:19.832 "trtype": "tcp", 00:27:19.832 "traddr": "10.0.0.2", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "4420", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:19.832 "hdgst": false, 00:27:19.832 "ddgst": false 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 },{ 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme4", 00:27:19.832 "trtype": "tcp", 00:27:19.832 "traddr": "10.0.0.2", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "4420", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:19.832 "hdgst": false, 00:27:19.832 "ddgst": false 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 },{ 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme5", 00:27:19.832 "trtype": "tcp", 00:27:19.832 "traddr": "10.0.0.2", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "4420", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:19.832 "hdgst": false, 00:27:19.832 "ddgst": false 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 },{ 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme6", 00:27:19.832 "trtype": "tcp", 00:27:19.832 "traddr": "10.0.0.2", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "4420", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:19.832 "hdgst": false, 00:27:19.832 "ddgst": false 00:27:19.832 }, 00:27:19.832 "method": "bdev_nvme_attach_controller" 00:27:19.832 },{ 00:27:19.832 "params": { 00:27:19.832 "name": "Nvme7", 00:27:19.832 "trtype": "tcp", 00:27:19.832 "traddr": "10.0.0.2", 00:27:19.832 "adrfam": "ipv4", 00:27:19.832 "trsvcid": "4420", 00:27:19.832 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:19.832 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:19.833 "hdgst": false, 00:27:19.833 "ddgst": false 00:27:19.833 }, 00:27:19.833 "method": "bdev_nvme_attach_controller" 00:27:19.833 },{ 00:27:19.833 "params": { 00:27:19.833 "name": "Nvme8", 00:27:19.833 "trtype": "tcp", 00:27:19.833 "traddr": "10.0.0.2", 00:27:19.833 "adrfam": "ipv4", 00:27:19.833 "trsvcid": "4420", 00:27:19.833 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:19.833 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:19.833 "hdgst": false, 00:27:19.833 "ddgst": false 00:27:19.833 }, 00:27:19.833 "method": 
"bdev_nvme_attach_controller" 00:27:19.833 },{ 00:27:19.833 "params": { 00:27:19.833 "name": "Nvme9", 00:27:19.833 "trtype": "tcp", 00:27:19.833 "traddr": "10.0.0.2", 00:27:19.833 "adrfam": "ipv4", 00:27:19.833 "trsvcid": "4420", 00:27:19.833 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:19.833 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:19.833 "hdgst": false, 00:27:19.833 "ddgst": false 00:27:19.833 }, 00:27:19.833 "method": "bdev_nvme_attach_controller" 00:27:19.833 },{ 00:27:19.833 "params": { 00:27:19.833 "name": "Nvme10", 00:27:19.833 "trtype": "tcp", 00:27:19.833 "traddr": "10.0.0.2", 00:27:19.833 "adrfam": "ipv4", 00:27:19.833 "trsvcid": "4420", 00:27:19.833 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:19.833 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:19.833 "hdgst": false, 00:27:19.833 "ddgst": false 00:27:19.833 }, 00:27:19.833 "method": "bdev_nvme_attach_controller" 00:27:19.833 }' 00:27:19.833 [2024-04-26 23:30:08.900078] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.833 [2024-04-26 23:30:08.929236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.771 Running I/O for 10 seconds... 00:27:22.342 23:30:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:22.342 23:30:11 -- common/autotest_common.sh@850 -- # return 0 00:27:22.342 23:30:11 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:22.342 23:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.342 23:30:11 -- common/autotest_common.sh@10 -- # set +x 00:27:22.342 23:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.342 23:30:11 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:22.342 23:30:11 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:22.342 23:30:11 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:22.342 23:30:11 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:22.342 23:30:11 -- target/shutdown.sh@57 -- # local ret=1 00:27:22.342 23:30:11 -- target/shutdown.sh@58 -- # local i 00:27:22.342 23:30:11 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:22.342 23:30:11 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:22.342 23:30:11 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:22.342 23:30:11 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:22.342 23:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.342 23:30:11 -- common/autotest_common.sh@10 -- # set +x 00:27:22.342 23:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.342 23:30:11 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:22.342 23:30:11 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:22.342 23:30:11 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:22.618 23:30:11 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:22.618 23:30:11 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:22.618 23:30:11 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:22.618 23:30:11 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:22.618 23:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:22.618 23:30:11 -- common/autotest_common.sh@10 -- # set +x 00:27:22.618 23:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:22.618 23:30:11 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:22.618 23:30:11 -- 
target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:22.618 23:30:11 -- target/shutdown.sh@64 -- # ret=0 00:27:22.618 23:30:11 -- target/shutdown.sh@65 -- # break 00:27:22.618 23:30:11 -- target/shutdown.sh@69 -- # return 0 00:27:22.618 23:30:11 -- target/shutdown.sh@135 -- # killprocess 4070207 00:27:22.619 23:30:11 -- common/autotest_common.sh@936 -- # '[' -z 4070207 ']' 00:27:22.619 23:30:11 -- common/autotest_common.sh@940 -- # kill -0 4070207 00:27:22.619 23:30:11 -- common/autotest_common.sh@941 -- # uname 00:27:22.619 23:30:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:22.619 23:30:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4070207 00:27:22.619 23:30:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:22.619 23:30:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:22.619 23:30:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4070207' killing process with pid 4070207 00:27:22.619 23:30:11 -- common/autotest_common.sh@955 -- # kill 4070207 00:27:22.619 23:30:11 -- common/autotest_common.sh@960 -- # wait 4070207 00:27:22.619 [2024-04-26 23:30:11.734645] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987830 is same with the state(5) to be set
[several hundred near-identical nvmf_tcp_qpair_set_recv_state *ERROR* entries omitted: the message above repeats between 23:30:11.734645 and 23:30:11.740503 while the target shuts down, cycling through tqpair=0x987830, 0x98a050, 0x987ce0, 0x988170 and 0x988600]
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740549] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.740574] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988600 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.741142] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988a90 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.741156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988a90 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.741161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988a90 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.741165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988a90 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.741170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988a90 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.741174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988a90 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.741179] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988a90 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.741183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988a90 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.741188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988a90 is same with the state(5) to be set 00:27:22.622 [2024-04-26 23:30:11.741192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988a90 is same with the state(5) to be set 
00:27:22.622 [... identical recv-state error for tqpair=0x988a90 repeated through 23:30:11.741429 ...]
00:27:22.623 [2024-04-26 23:30:11.742013] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988f40 is same with the state(5) to be set
00:27:22.623 [... identical recv-state error for tqpair=0x988f40 repeated through 23:30:11.742231 ...]
00:27:22.624 [2024-04-26 23:30:11.749463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:22.624 [2024-04-26 23:30:11.749498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:22.624 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2, and cid:3 ...]
00:27:22.624 [2024-04-26 23:30:11.749554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ac90 is same with the state(5) to be set
00:27:22.624 [... the same four ASYNC EVENT REQUEST aborts plus recv-state error repeated for tqpair=0x2183c90, 0x1baf3f0, 0x1fedc20, 0x20913a0, and 0x218cfb0, through 23:30:11.750019 ...]
00:27:22.625 [2024-04-26 23:30:11.752178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988f40 is same with the state(5) to be set
00:27:22.625 [... identical recv-state error for tqpair=0x988f40 repeated through 23:30:11.752309 ...]
00:27:22.625 [2024-04-26 23:30:11.753339] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9892a0 is same with the state(5) to be set
00:27:22.625 [... identical recv-state error for tqpair=0x9892a0 repeated through 23:30:11.753626 ...]
00:27:22.625 [2024-04-26 23:30:11.753717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.625 [2024-04-26 23:30:11.753740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:22.626 [... the same READ / ABORTED - SQ DELETION pair repeated for cid:1 through cid:48 (lba stepping by 128 from 24704 to 30720), interleaved with recv-state errors for tqpair=0x989730 at 23:30:11.754275-754303 ...]
00:27:22.627 [2024-04-26 23:30:11.754553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x989bc0 is same with the state(5) to be set
00:27:22.627 [2024-04-26 23:30:11.754566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.627 [2024-04-26 23:30:11.754573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.754796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.754859] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20f7c10 was disconnected and freed. reset controller. 00:27:22.627 [2024-04-26 23:30:11.755785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:22.627 [2024-04-26 23:30:11.755902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.755991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.755999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.756008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.756014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.756023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.627 [2024-04-26 23:30:11.756032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.627 [2024-04-26 23:30:11.756041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:22.628 [2024-04-26 23:30:11.756064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 
[2024-04-26 23:30:11.756222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 
23:30:11.756382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 
23:30:11.756544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.628 [2024-04-26 23:30:11.756569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.628 [2024-04-26 23:30:11.756576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 
23:30:11.756704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.629 [2024-04-26 23:30:11.756831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.629 [2024-04-26 23:30:11.756859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:22.629 [2024-04-26 23:30:11.756896] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fbdc60 was disconnected and freed. reset controller. 
00:27:22.629 [2024-04-26 23:30:11.758443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.629 [2024-04-26 23:30:11.758462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeat for cid:57-63, lba:40064-40832, then cid:0-1, lba:40960-41088, len:128 ...]
00:27:22.629 [2024-04-26 23:30:11.765569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.629 [2024-04-26 23:30:11.765602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED pair for cid:3, lba:41344, then READ / ABORTED - SQ DELETION (00/08) pairs for cid:4-55, lba:33280-39808, len:128 ...]
00:27:22.631 [2024-04-26 23:30:11.766535] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fbc9b0 was disconnected and freed. reset controller.
00:27:22.631 [2024-04-26 23:30:11.767799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:22.631 [2024-04-26 23:30:11.767829] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:22.631 [2024-04-26 23:30:11.767877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2070bf0 (9): Bad file descriptor
00:27:22.631 [2024-04-26 23:30:11.767892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf3f0 (9): Bad file descriptor
00:27:22.631 [2024-04-26 23:30:11.767936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:22.631 [2024-04-26 23:30:11.767948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) pairs repeat for qid:0 cid:1-3 ...]
23:30:11.768003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a3b0 is same with the state(5) to be set 00:27:22.631 [2024-04-26 23:30:11.768014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ac90 (9): Bad file descriptor 00:27:22.631 [2024-04-26 23:30:11.768029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183c90 (9): Bad file descriptor 00:27:22.631 [2024-04-26 23:30:11.768054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.631 [2024-04-26 23:30:11.768063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.768071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.631 [2024-04-26 23:30:11.768078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.768086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.631 [2024-04-26 23:30:11.768096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.768104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.631 [2024-04-26 23:30:11.768111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.768117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180060 is same with the state(5) to be set 00:27:22.631 [2024-04-26 23:30:11.768142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.631 [2024-04-26 23:30:11.768151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.768159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.631 [2024-04-26 23:30:11.768165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.768173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.631 [2024-04-26 23:30:11.768180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.768187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.631 [2024-04-26 23:30:11.768195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.768201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe6870 is same 
with the state(5) to be set 00:27:22.631 [2024-04-26 23:30:11.768217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fedc20 (9): Bad file descriptor 00:27:22.631 [2024-04-26 23:30:11.768234] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20913a0 (9): Bad file descriptor 00:27:22.631 [2024-04-26 23:30:11.768253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218cfb0 (9): Bad file descriptor 00:27:22.631 [2024-04-26 23:30:11.769779] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:22.631 [2024-04-26 23:30:11.769810] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:22.631 [2024-04-26 23:30:11.769824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a3b0 (9): Bad file descriptor 00:27:22.631 [2024-04-26 23:30:11.770149] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:22.631 [2024-04-26 23:30:11.770188] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:22.631 [2024-04-26 23:30:11.770226] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:22.631 [2024-04-26 23:30:11.770262] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:22.631 [2024-04-26 23:30:11.771070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.631 [2024-04-26 23:30:11.771528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.631 [2024-04-26 23:30:11.771541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf3f0 with addr=10.0.0.2, port=4420 00:27:22.631 [2024-04-26 23:30:11.771551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf3f0 is same with the state(5) to be set 00:27:22.631 [2024-04-26 23:30:11.772035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.631 [2024-04-26 23:30:11.772282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.631 [2024-04-26 23:30:11.772300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2070bf0 with addr=10.0.0.2, port=4420 00:27:22.631 [2024-04-26 23:30:11.772309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070bf0 is same with the state(5) to be set 00:27:22.631 [2024-04-26 23:30:11.772380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.631 [2024-04-26 23:30:11.772590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.631 [2024-04-26 23:30:11.772597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
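For reference, the "(00/08)" that the spdk_nvme_print_completion lines above keep printing is the NVMe completion status rendered as (SCT/SC): status code type 0x0 (Generic Command Status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. A minimal standalone C sketch of that decoding follows; it is illustrative only, not SPDK source, and the helper name nvme_generic_sc_str is made up here.

#include <stdio.h>

/* Illustrative decode of the "(SCT/SC)" pair printed above.
 * SCT 0x0 is Generic Command Status; within it, SC 0x08 is
 * "Command Aborted due to SQ Deletion" per the NVMe base spec. */
static const char *nvme_generic_sc_str(unsigned sc)
{
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x04: return "ABORTED - BY REQUEST";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "OTHER";
    }
}

int main(void)
{
    unsigned sct = 0x00, sc = 0x08;   /* as printed in the log: (00/08) */
    if (sct == 0x00)
        printf("(%02x/%02x) = %s\n", sct, sc, nvme_generic_sc_str(sc));
    return 0;
}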
00:27:22.631 [2024-04-26 23:30:11.772606-773346] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:18-63 nsid:1 lba:26880-32640 len:128, each ABORTED - SQ DELETION (00/08) qid:1 (one print_command/print_completion pair per cid)
00:27:22.633 [2024-04-26 23:30:11.773355-773427] nvme_qpair.c: *NOTICE*: WRITE sqid:1 cid:0-4 nsid:1 lba:32768-33280 len:128, each ABORTED - SQ DELETION (00/08) qid:1
00:27:22.633 [2024-04-26 23:30:11.773435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176920 is same with the state(5) to be set
00:27:22.633 [2024-04-26 23:30:11.773490] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2176920 was disconnected and freed. reset controller.
00:27:22.633 [2024-04-26 23:30:11.773555-773595] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (2 occurrences)
00:27:22.633 [2024-04-26 23:30:11.774087-774287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (x2); nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a3b0 with addr=10.0.0.2, port=4420; recv state of tqpair=0x217a3b0 is same with the state(5) to be set
00:27:22.633 [2024-04-26 23:30:11.774298-774308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf3f0, 0x2070bf0 (9): Bad file descriptor
00:27:22.633 [2024-04-26 23:30:11.775601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:22.633 [2024-04-26 23:30:11.775629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a3b0 (9): Bad file descriptor
00:27:22.633 [2024-04-26 23:30:11.775640-775656] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state; controller reinitialization failed; in failed state.
00:27:22.633 [2024-04-26 23:30:11.775670-775685] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state; controller reinitialization failed; in failed state.
00:27:22.633 [2024-04-26 23:30:11.775740-775753] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (2 occurrences)
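The repeated "connect() failed, errno = 111" lines above are ECONNREFUSED: while the target's subsystems at 10.0.0.2:4420 are being torn down, nothing is accepting connections on that port, so each reconnect attempt during the controller reset fails immediately. A minimal sketch of that failing step using plain POSIX sockets follows; the plain-socket approach is an assumption for illustration, since SPDK's actual connect path goes through its own sock layer in posix.c.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: connect to the NVMe/TCP target address from the log
 * (10.0.0.2:4420) while no listener is present, reproducing the
 * errno 111 seen in the posix_sock_create errors above. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = { .sin_family = AF_INET,
                              .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* errno 111 on Linux is ECONNREFUSED: nothing is accepting
         * on that port, so every reset/reconnect attempt fails
         * right away, exactly as the log shows. */
        printf("connect() failed, errno = %d (%s)\n",
               errno, strerror(errno));
    }
    close(fd);
    return 0;
}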
00:27:22.633 [2024-04-26 23:30:11.775998-776328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (x2)
00:27:22.633 [2024-04-26 23:30:11.776337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fedc20 with addr=10.0.0.2, port=4420
00:27:22.633 [2024-04-26 23:30:11.776345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedc20 is same with the state(5) to be set
00:27:22.633 [2024-04-26 23:30:11.776352-776368] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state; controller reinitialization failed; in failed state.
00:27:22.633 [2024-04-26 23:30:11.776660] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:22.633 [2024-04-26 23:30:11.776670] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fedc20 (9): Bad file descriptor
00:27:22.633 [2024-04-26 23:30:11.776710-776724] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state; controller reinitialization failed; in failed state.
00:27:22.633 [2024-04-26 23:30:11.776766] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
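Taken together, the sequence for each controller above (cnode1, cnode3, cnode8, cnode9) is: qpair disconnected, reset and reconnect attempted, the TCP connect refused, the controller marked as being in a failed state, and the bdev layer reporting "Resetting controller failed." A compact sketch of that state progression follows, with hypothetical types and names (ctrlr_state, try_reconnect, reset_ctrlr are invented here and are not SPDK's API).

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical state machine mirroring the log's failure sequence:
 * disconnect -> bounded reconnect attempts -> controller failed. */
enum ctrlr_state { CONNECTED, RESETTING, FAILED };

struct ctrlr {
    enum ctrlr_state state;
    int reconnect_attempts;
};

static bool try_reconnect(struct ctrlr *c)
{
    /* Stands in for the TCP connect that returns ECONNREFUSED in
     * the log; it always fails here, as it does while the target's
     * listener is down. */
    (void)c;
    return false;
}

static void reset_ctrlr(struct ctrlr *c, int max_attempts)
{
    c->state = RESETTING;
    for (c->reconnect_attempts = 0;
         c->reconnect_attempts < max_attempts;
         c->reconnect_attempts++) {
        if (try_reconnect(c)) { c->state = CONNECTED; return; }
    }
    c->state = FAILED;                    /* "in failed state." */
    printf("Resetting controller failed.\n");
}

int main(void)
{
    struct ctrlr c = { CONNECTED, 0 };
    reset_ctrlr(&c, 3);
    return 0;
}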
00:27:22.633 [2024-04-26 23:30:11.777850-777870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2180060, 0x1fe6870 (9): Bad file descriptor
00:27:22.633 [2024-04-26 23:30:11.777981-779034] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128, each ABORTED - SQ DELETION (00/08) qid:1 (one print_command/print_completion pair per cid)
00:27:22.635 [2024-04-26 23:30:11.779042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21754b0 is same with the state(5) to be set
00:27:22.635 [2024-04-26 23:30:11.780304-780724] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-24 nsid:1 lba:24576-27648 len:128, each ABORTED - SQ DELETION (00/08) qid:1 (one print_command/print_completion pair per cid) [2024-04-26 23:30:11.780731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.780991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.780998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:22.636 [2024-04-26 23:30:11.781074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 
23:30:11.781237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.636 [2024-04-26 23:30:11.781336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.636 [2024-04-26 23:30:11.781343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.637 [2024-04-26 23:30:11.781352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.637 [2024-04-26 23:30:11.781359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.637 [2024-04-26 23:30:11.781369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.637 [2024-04-26 23:30:11.781376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.637 [2024-04-26 23:30:11.781384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2177e20 is same with the state(5) to be set 00:27:22.637 [2024-04-26 23:30:11.782646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.637 [2024-04-26 23:30:11.782659] 
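
Every completion in the dumps above carries the same status, printed as (00/08): NVMe Status Code Type 0x0 (Generic Command Status) with Status Code 0x08 (Command Aborted due to SQ Deletion), i.e. the status a controller returns for I/O still outstanding when its submission queue is deleted. A minimal sketch of decoding that pair, using a hypothetical struct that mirrors the p/sc/sct/m/dnr bit layout these lines print (not taken from SPDK's headers):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical mirror of the NVMe completion status word; field names
 * follow the values printed in the log above (p, m, dnr) plus sct/sc. */
struct nvme_status {
    uint16_t p   : 1;  /* phase tag           */
    uint16_t sc  : 8;  /* status code         */
    uint16_t sct : 3;  /* status code type    */
    uint16_t crd : 2;  /* command retry delay */
    uint16_t m   : 1;  /* more                */
    uint16_t dnr : 1;  /* do not retry        */
};

static const char *decode(struct nvme_status s)
{
    /* SCT 0x0 = Generic Command Status; within it, SC 0x08 is
     * "Command Aborted due to SQ Deletion" -- the (00/08) above. */
    if (s.sct == 0x0 && s.sc == 0x08)
        return "ABORTED - SQ DELETION";
    return "other";
}

int main(void)
{
    struct nvme_status s = { .sc = 0x08, .sct = 0x0 };
    /* Prints "(00/08) ABORTED - SQ DELETION dnr:0"; dnr:0 leaves the
     * command retryable, matching every aborted READ/WRITE logged here. */
    printf("(%02x/%02x) %s dnr:%u\n", s.sct, s.sc, decode(s), s.dnr);
    return 0;
}
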
00:27:22.637 [2024-04-26 23:30:11.782646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.637 [2024-04-26 23:30:11.782659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command/completion pairs repeat for cid:1-3 (lba 32896-33152), then READ command/completion pairs for cid:4-62, lba stepping by 128 from 25088 to 32512 ...]
00:27:22.638 [2024-04-26 23:30:11.783712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.638 [2024-04-26 23:30:11.783719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:22.638 [2024-04-26 23:30:11.783727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21792d0 is same with the state(5) to be set
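
The LBA pattern in these dumps is mechanical: each queue holds 64 outstanding commands (cid 0-63) of len:128 blocks, lba advances by 128 per cid from a base of 24576, and the WRITE stream begins at 32768 = 24576 + 64*128, directly after the READ region. A quick check of that arithmetic (an assumed reconstruction of the workload's pattern, not its actual code):

#include <assert.h>
#include <stdio.h>

int main(void)
{
    const unsigned read_base = 24576, len = 128, qd = 64;

    /* READ cid:n covers [read_base + n*len, +len); cid:63 -> lba 32640,
     * matching the last READ printed in each dump above. */
    assert(read_base + 63u * len == 32640);

    /* The WRITE stream starts right where the READ region ends. */
    const unsigned write_base = read_base + qd * len;
    assert(write_base == 32768);

    /* Reproduces "WRITE sqid:1 cid:0-3 lba:32768-33152 len:128". */
    for (unsigned cid = 0; cid < 4; cid++)
        printf("WRITE cid:%u lba:%u len:%u\n", cid, write_base + cid * len, len);
    return 0;
}
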
00:27:22.638 [2024-04-26 23:30:11.785007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.638 [2024-04-26 23:30:11.785020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat: READ cid:5-7 (lba 25216-25472), WRITE cid:0-3 (lba 32768-33152), READ cid:8-55 (lba 25600-31616), all ABORTED - SQ DELETION (00/08) ...]
00:27:22.640 [2024-04-26 23:30:11.785948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.640 [2024-04-26
23:30:11.785955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.640 [2024-04-26 23:30:11.785964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.640 [2024-04-26 23:30:11.785971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.640 [2024-04-26 23:30:11.785980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.640 [2024-04-26 23:30:11.785987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.640 [2024-04-26 23:30:11.785996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.640 [2024-04-26 23:30:11.786003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.640 [2024-04-26 23:30:11.786012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.640 [2024-04-26 23:30:11.786019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.640 [2024-04-26 23:30:11.786028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.640 [2024-04-26 23:30:11.786036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.640 [2024-04-26 23:30:11.786045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.640 [2024-04-26 23:30:11.786052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.640 [2024-04-26 23:30:11.786062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.640 [2024-04-26 23:30:11.786069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.640 [2024-04-26 23:30:11.786077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fb70 is same with the state(5) to be set 00:27:22.640 [2024-04-26 23:30:11.787589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:22.640 [2024-04-26 23:30:11.787611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:22.640 [2024-04-26 23:30:11.787620] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:22.640 [2024-04-26 23:30:11.787633] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:22.640 [2024-04-26 23:30:11.788131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.788484] posix.c:1037:posix_sock_create: *ERROR*: 
connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.788493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ac90 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-04-26 23:30:11.788501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ac90 is same with the state(5) to be set 00:27:22.640 [2024-04-26 23:30:11.788851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.789231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.789241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218cfb0 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-04-26 23:30:11.789247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cfb0 is same with the state(5) to be set 00:27:22.640 [2024-04-26 23:30:11.789596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.789971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.789980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183c90 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-04-26 23:30:11.789987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183c90 is same with the state(5) to be set 00:27:22.640 [2024-04-26 23:30:11.790346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.790682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.790691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20913a0 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-04-26 23:30:11.790699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20913a0 is same with the state(5) to be set 00:27:22.640 [2024-04-26 23:30:11.791728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:22.640 [2024-04-26 23:30:11.791739] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.640 [2024-04-26 23:30:11.791748] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:22.640 [2024-04-26 23:30:11.791756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:22.640 [2024-04-26 23:30:11.791788] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ac90 (9): Bad file descriptor 00:27:22.640 [2024-04-26 23:30:11.791797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218cfb0 (9): Bad file descriptor 00:27:22.640 [2024-04-26 23:30:11.791806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183c90 (9): Bad file descriptor 00:27:22.640 [2024-04-26 23:30:11.791815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20913a0 (9): Bad file descriptor 00:27:22.640 [2024-04-26 23:30:11.792299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.792496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.792505] 
nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2070bf0 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-04-26 23:30:11.792512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070bf0 is same with the state(5) to be set 00:27:22.640 [2024-04-26 23:30:11.792862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.793229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.793238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baf3f0 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-04-26 23:30:11.793248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf3f0 is same with the state(5) to be set 00:27:22.640 [2024-04-26 23:30:11.793609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.793968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.793977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a3b0 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-04-26 23:30:11.793984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a3b0 is same with the state(5) to be set 00:27:22.640 [2024-04-26 23:30:11.794349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.794732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-04-26 23:30:11.794741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fedc20 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-04-26 23:30:11.794748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fedc20 is same with the state(5) to be set 00:27:22.640 [2024-04-26 23:30:11.794756] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:22.640 [2024-04-26 23:30:11.794763] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:22.640 [2024-04-26 23:30:11.794770] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:22.640 [2024-04-26 23:30:11.794781] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:22.640 [2024-04-26 23:30:11.794788] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:22.640 [2024-04-26 23:30:11.794794] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:22.640 [2024-04-26 23:30:11.794804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:22.640 [2024-04-26 23:30:11.794811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:22.640 [2024-04-26 23:30:11.794817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:27:22.640 [2024-04-26 23:30:11.794828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:22.640 [2024-04-26 23:30:11.794834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:22.640 [2024-04-26 23:30:11.794845] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:22.641 [2024-04-26 23:30:11.794896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.641 [2024-04-26 23:30:11.794905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.641 [2024-04-26 23:30:11.794918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.641 [2024-04-26 23:30:11.794925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.641 [2024-04-26 23:30:11.794934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.641 [2024-04-26 23:30:11.794941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.641 [2024-04-26 23:30:11.794951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.641 [2024-04-26 23:30:11.794957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.641 [2024-04-26 23:30:11.794969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.641 [2024-04-26 23:30:11.794976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.641 [2024-04-26 23:30:11.794985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.641 [2024-04-26 23:30:11.794993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.641 [2024-04-26 23:30:11.795002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.641 [2024-04-26 23:30:11.795009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.641 [2024-04-26 23:30:11.795018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.641 [2024-04-26 23:30:11.795025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.641 [2024-04-26 23:30:11.795034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.641 [2024-04-26 23:30:11.795041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
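Each reconnect attempt above dies at the socket layer: errno 111 is ECONNREFUSED on Linux, meaning nothing was listening on 10.0.0.2:4420 when the initiator re-dialed the target. A minimal sketch with plain POSIX sockets (not SPDK's posix_sock_create, and the address and port are simply the ones the log reports) reproduces exactly that errno:

/* sketch: TCP connect() to a port with no listener fails with errno 111 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                    /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* with no listener on the port this prints errno = 111 on Linux */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}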
00:27:22.641 [2024-04-26 23:30:11.794896 - 23:30:11.795938] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs)
00:27:22.642 [2024-04-26 23:30:11.795946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba050 is same with the state(5) to be set
00:27:22.642 [2024-04-26 23:30:11.797215 - 23:30:11.798264] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs)
00:27:22.644 [2024-04-26 23:30:11.798272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbb500 is same with the state(5) to be set
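The "(00/08)" on every aborted completion is the NVMe status pair SCT/SC: status code type 0x0 (generic command status) with status code 0x08, which the NVMe specification names Command Aborted due to SQ Deletion, exactly the text nvme_qpair.c prints. A hedged sketch of pulling those fields out of completion dword 3 (the decode helper is illustrative, not SPDK's code):

/* sketch: extract SCT/SC/DNR from the status field in CQE dword 3 (bits 31:17) */
#include <stdio.h>
#include <stdint.h>

static void decode(uint32_t cdw3)
{
    unsigned sc  = (cdw3 >> 17) & 0xff; /* status code              */
    unsigned sct = (cdw3 >> 25) & 0x7;  /* status code type         */
    unsigned dnr = (cdw3 >> 31) & 0x1;  /* do-not-retry bit         */
    printf("(%02x/%02x) dnr:%u%s\n", sct, sc, dnr,
           (sct == 0x0 && sc == 0x08) ? " -> ABORTED - SQ DELETION" : "");
}

int main(void)
{
    decode(0x08u << 17); /* the (00/08) completions seen throughout this run */
    return 0;
}

Note that dnr:0 in the same log lines is bit 31 of that status field, so these aborted reads may legitimately be retried once the queue pair is re-established.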
00:27:22.644 [2024-04-26 23:30:11.799770] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.644 [2024-04-26 23:30:11.799776] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.644 [2024-04-26 23:30:11.799782] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.644 [2024-04-26 23:30:11.799790] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:22.644 task offset: 24576 on job bdev=Nvme1n1 fails
00:27:22.644
00:27:22.644 Latency(us)
00:27:22.644 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:27:22.644 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:22.644 Job: Nvme1n1 ended in about 1.09 seconds with error
00:27:22.644 Verification LBA range: start 0x0 length 0x400
00:27:22.644 Nvme1n1  : 1.09    175.83   10.99   58.61   0.00  270292.80    7154.35  262144.00
00:27:22.644 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:22.644 Job: Nvme2n1 ended in about 1.11 seconds with error
00:27:22.644 Verification LBA range: start 0x0 length 0x400
00:27:22.644 Nvme2n1  : 1.11    172.35   10.77   57.45   0.00  270967.47   22609.92  251658.24
00:27:22.644 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:22.644 Job: Nvme3n1 ended in about 1.11 seconds with error
00:27:22.644 Verification LBA range: start 0x0 length 0x400
00:27:22.644 Nvme3n1  : 1.11    177.58   11.10   57.69   0.00  259867.19   19333.12  265639.25
00:27:22.644 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:22.644 Job: Nvme4n1 ended in about 1.12 seconds with error
00:27:22.644 Verification LBA range: start 0x0 length 0x400
00:27:22.644 Nvme4n1  : 1.12    171.99   10.75   57.33   0.00  261845.76   21080.75  249910.61
00:27:22.644 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:22.644 Job: Nvme5n1 ended in about 1.12 seconds with error
00:27:22.644 Verification LBA range: start 0x0 length 0x400
00:27:22.644 Nvme5n1  : 1.12    175.20   10.95   57.21   0.00  253624.43    6389.76  237677.23
00:27:22.644 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:22.644 Job: Nvme6n1 ended in about 1.13 seconds with error
00:27:22.644 Verification LBA range: start 0x0 length 0x400
00:27:22.644 Nvme6n1  : 1.13    169.78   10.61   56.59   0.00  255773.65   16930.13  262144.00
00:27:22.644 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:22.644 Job: Nvme7n1 ended in about 1.13 seconds with error
00:27:22.644 Verification LBA range: start 0x0 length 0x400
00:27:22.644 Nvme7n1  : 1.13    169.43   10.59   56.48   0.00  251541.23   13762.56  248162.99
00:27:22.644 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:22.644 Job: Nvme8n1 ended in about 1.10 seconds with error
00:27:22.644 Verification LBA range: start 0x0 length 0x400
00:27:22.644 Nvme8n1  : 1.10    235.60   14.73   57.99   0.00  189072.37   10103.47  244667.73
00:27:22.644 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:22.644 Job: Nvme9n1 ended in about 1.10 seconds with error
00:27:22.644 Verification LBA range: start 0x0 length 0x400
00:27:22.644 Nvme9n1  : 1.10    174.31   10.89   58.10   0.00  234089.17   11086.51  274377.39
00:27:22.644 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:22.644 Job: Nvme10n1 ended in about 1.12 seconds with error
00:27:22.644 Verification LBA range: start 0x0 length 0x400
00:27:22.644 Nvme10n1 : 1.12    174.84   10.93   57.09   0.00  230586.74    7973.55  255153.49
00:27:22.644 ===================================================================================================================
00:27:22.644 Total    :        1796.91  112.31  574.55   0.00  246260.61    6389.76  274377.39
00:27:22.644 [2024-04-26 23:30:11.823522] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:22.644 [2024-04-26 23:30:11.823575] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:22.644 [2024-04-26 23:30:11.823622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2070bf0 (9): Bad file descriptor 00:27:22.644 [2024-04-26 23:30:11.823636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf3f0 (9): Bad file descriptor 00:27:22.644 [2024-04-26 23:30:11.823645] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a3b0 (9): Bad file descriptor 00:27:22.644 [2024-04-26 23:30:11.823654] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fedc20 (9): Bad file descriptor 00:27:22.644 [2024-04-26 23:30:11.824123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.644 [2024-04-26 23:30:11.824318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.644 [2024-04-26 23:30:11.824328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe6870 with addr=10.0.0.2, port=4420 00:27:22.644 [2024-04-26 23:30:11.824337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe6870 is same with the state(5) to be set
00:27:22.644 [2024-04-26 23:30:11.824691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.644 [2024-04-26 23:30:11.825037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.644 [2024-04-26 23:30:11.825047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2180060 with addr=10.0.0.2, port=4420 00:27:22.644 [2024-04-26 23:30:11.825055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180060 is same with the state(5) to be set 00:27:22.644 [2024-04-26 23:30:11.825062] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:22.644 [2024-04-26 23:30:11.825069] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:22.644 [2024-04-26 23:30:11.825077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:22.644 [2024-04-26 23:30:11.825089] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.644 [2024-04-26 23:30:11.825095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.644 [2024-04-26 23:30:11.825102] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:22.644 [2024-04-26 23:30:11.825112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:22.644 [2024-04-26 23:30:11.825118] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:22.644 [2024-04-26 23:30:11.825125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:22.645 [2024-04-26 23:30:11.825134] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:22.645 [2024-04-26 23:30:11.825140] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:22.645 [2024-04-26 23:30:11.825147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:22.645 [2024-04-26 23:30:11.825163] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:22.645 [2024-04-26 23:30:11.825174] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:22.645 [2024-04-26 23:30:11.825183] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:22.645 [2024-04-26 23:30:11.825194] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:22.645 [2024-04-26 23:30:11.825213] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:22.645 [2024-04-26 23:30:11.825227] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:22.645 [2024-04-26 23:30:11.825237] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:22.645 [2024-04-26 23:30:11.825247] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:22.645 [2024-04-26 23:30:11.825779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:22.645 [2024-04-26 23:30:11.825790] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:22.645 [2024-04-26 23:30:11.825799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:22.645 [2024-04-26 23:30:11.825807] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:22.645 [2024-04-26 23:30:11.825829] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.645 [2024-04-26 23:30:11.825842] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.645 [2024-04-26 23:30:11.825849] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.645 [2024-04-26 23:30:11.825879] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe6870 (9): Bad file descriptor 00:27:22.645 [2024-04-26 23:30:11.825890] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2180060 (9): Bad file descriptor 00:27:22.645 [2024-04-26 23:30:11.825925] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.645 [2024-04-26 23:30:11.826294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.645 [2024-04-26 23:30:11.826666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.645 [2024-04-26 23:30:11.826676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20913a0 with addr=10.0.0.2, port=4420 00:27:22.645 [2024-04-26 23:30:11.826683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20913a0 is same with the state(5) to be set 00:27:22.645 [2024-04-26 23:30:11.827040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.645 [2024-04-26 23:30:11.827400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.645 [2024-04-26 23:30:11.827409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183c90 with addr=10.0.0.2, port=4420 00:27:22.645 [2024-04-26 23:30:11.827416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183c90 is same with the state(5) to be set 00:27:22.645 [2024-04-26 23:30:11.827700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.645 [2024-04-26 23:30:11.827916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.645 [2024-04-26 23:30:11.827925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218cfb0 with addr=10.0.0.2, port=4420 00:27:22.645 [2024-04-26 23:30:11.827932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cfb0 is same with the state(5) to be set 00:27:22.645 [2024-04-26 23:30:11.828267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.645 [2024-04-26 23:30:11.828451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.645 [2024-04-26 23:30:11.828460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1ac90 with addr=10.0.0.2, port=4420 00:27:22.645 [2024-04-26 23:30:11.828467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1ac90 is same with the state(5) to be set 00:27:22.645 [2024-04-26 23:30:11.828474] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:22.645 [2024-04-26 23:30:11.828480] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:22.645 [2024-04-26 23:30:11.828487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:22.645 [2024-04-26 23:30:11.828500] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:22.645 [2024-04-26 23:30:11.828506] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:22.645 [2024-04-26 23:30:11.828512] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:22.645 [2024-04-26 23:30:11.828555] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.645 [2024-04-26 23:30:11.828563] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:22.645 [2024-04-26 23:30:11.828570] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20913a0 (9): Bad file descriptor 00:27:22.645 [2024-04-26 23:30:11.828580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183c90 (9): Bad file descriptor 00:27:22.645 [2024-04-26 23:30:11.828588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218cfb0 (9): Bad file descriptor 00:27:22.645 [2024-04-26 23:30:11.828597] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1ac90 (9): Bad file descriptor 00:27:22.645 [2024-04-26 23:30:11.828631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:22.645 [2024-04-26 23:30:11.828639] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:22.645 [2024-04-26 23:30:11.828646] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:22.645 [2024-04-26 23:30:11.828655] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:22.645 [2024-04-26 23:30:11.828661] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:22.645 [2024-04-26 23:30:11.828667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:22.645 [2024-04-26 23:30:11.828676] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:22.645 [2024-04-26 23:30:11.828683] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:22.645 [2024-04-26 23:30:11.828689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:22.645 [2024-04-26 23:30:11.828698] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:22.645 [2024-04-26 23:30:11.828704] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:22.645 [2024-04-26 23:30:11.828711] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:22.645 [2024-04-26 23:30:11.828740] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.645 [2024-04-26 23:30:11.828747] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.645 [2024-04-26 23:30:11.828754] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:22.645 [2024-04-26 23:30:11.828759] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
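The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: by this point the target has been stopped, so every reconnect attempt from the host side is refused, which is exactly the condition this shutdown test provokes. A quick way to confirm the listener is gone, as a hedged sketch (address and port are the ones this run used; nc is an assumed diagnostic tool, not something the test scripts invoke):

    # ECONNREFUSED (errno 111) means nothing is accepting on the NVMe/TCP port.
    # Probe the target address/port from this run; 'nc' here is an assumption.
    nc -zv -w 2 10.0.0.2 4420 || echo 'refused: target listener is down, matching the log above'
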
00:27:22.906 23:30:12 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:22.906 23:30:12 -- target/shutdown.sh@139 -- # sleep 1 00:27:23.950 23:30:13 -- target/shutdown.sh@142 -- # kill -9 4070754 00:27:23.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4070754) - No such process 00:27:23.950 23:30:13 -- target/shutdown.sh@142 -- # true 00:27:23.950 23:30:13 -- target/shutdown.sh@144 -- # stoptarget 00:27:23.950 23:30:13 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:23.950 23:30:13 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:23.950 23:30:13 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:23.950 23:30:13 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:23.950 23:30:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:23.950 23:30:13 -- nvmf/common.sh@117 -- # sync 00:27:23.950 23:30:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.950 23:30:13 -- nvmf/common.sh@120 -- # set +e 00:27:23.950 23:30:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.950 23:30:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.950 rmmod nvme_tcp 00:27:23.950 rmmod nvme_fabrics 00:27:23.950 rmmod nvme_keyring 00:27:23.950 23:30:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.950 23:30:13 -- nvmf/common.sh@124 -- # set -e 00:27:23.950 23:30:13 -- nvmf/common.sh@125 -- # return 0 00:27:23.950 23:30:13 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:27:23.950 23:30:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:23.950 23:30:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:23.950 23:30:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:23.950 23:30:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.950 23:30:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.950 23:30:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.950 23:30:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.950 23:30:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.492 23:30:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:26.492 00:27:26.492 real 0m8.184s 00:27:26.492 user 0m20.907s 00:27:26.492 sys 0m1.303s 00:27:26.492 23:30:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:26.492 23:30:15 -- common/autotest_common.sh@10 -- # set +x 00:27:26.492 ************************************ 00:27:26.492 END TEST nvmf_shutdown_tc3 00:27:26.492 ************************************ 00:27:26.492 23:30:15 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:26.492 00:27:26.492 real 0m32.893s 00:27:26.492 user 1m17.209s 00:27:26.492 sys 0m9.372s 00:27:26.492 23:30:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:26.492 23:30:15 -- common/autotest_common.sh@10 -- # set +x 00:27:26.492 ************************************ 00:27:26.492 END TEST nvmf_shutdown 00:27:26.492 ************************************ 00:27:26.492 23:30:15 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:27:26.492 23:30:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:26.492 23:30:15 -- common/autotest_common.sh@10 -- # set +x 00:27:26.492 23:30:15 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:27:26.492 23:30:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:26.492 23:30:15 -- common/autotest_common.sh@10 -- # set +x 00:27:26.492 
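The stoptarget/nvmftestfini sequence above is the test's fixed teardown pattern. A minimal sketch of the same steps, assuming the workspace layout of this run ($SPDK is a stand-in variable for this run's checkout, not part of the scripts):

    # Teardown as performed by shutdown.sh: drop the test's state files, then
    # unload the kernel initiator modules and flush the test address.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # assumed stand-in
    rm -f ./local-job0-0-verify.state
    rm -rf "$SPDK/test/nvmf/target/bdevperf.conf" "$SPDK/test/nvmf/target/rpcs.txt"
    modprobe -v -r nvme-tcp    # also removes nvme_fabrics/nvme_keyring, as logged
    ip -4 addr flush cvl_0_1   # drop the initiator-side test address
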
23:30:15 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:27:26.492 23:30:15 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:26.492 23:30:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:26.492 23:30:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:26.492 23:30:15 -- common/autotest_common.sh@10 -- # set +x 00:27:26.492 ************************************ 00:27:26.492 START TEST nvmf_multicontroller 00:27:26.492 ************************************ 00:27:26.492 23:30:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:26.492 * Looking for test storage... 00:27:26.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:26.492 23:30:15 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.492 23:30:15 -- nvmf/common.sh@7 -- # uname -s 00:27:26.492 23:30:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.492 23:30:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.492 23:30:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.492 23:30:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.492 23:30:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.492 23:30:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.492 23:30:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.492 23:30:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.492 23:30:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.492 23:30:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.492 23:30:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:26.492 23:30:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:26.492 23:30:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.492 23:30:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.492 23:30:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.492 23:30:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.492 23:30:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.492 23:30:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.492 23:30:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.492 23:30:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.492 23:30:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.492 23:30:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.493 23:30:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.493 23:30:15 -- paths/export.sh@5 -- # export PATH 00:27:26.493 23:30:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.493 23:30:15 -- nvmf/common.sh@47 -- # : 0 00:27:26.493 23:30:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:26.493 23:30:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:26.493 23:30:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.493 23:30:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.493 23:30:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.493 23:30:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:26.493 23:30:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:26.493 23:30:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:26.493 23:30:15 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:26.493 23:30:15 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:26.493 23:30:15 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:26.493 23:30:15 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:26.493 23:30:15 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:26.493 23:30:15 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:26.493 23:30:15 -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:26.493 23:30:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:26.493 23:30:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.493 23:30:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:26.493 23:30:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:26.493 23:30:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:26.493 23:30:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.493 23:30:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.493 23:30:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:27:26.493 23:30:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:26.493 23:30:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:26.493 23:30:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:26.493 23:30:15 -- common/autotest_common.sh@10 -- # set +x 00:27:34.636 23:30:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:34.636 23:30:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:34.636 23:30:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:34.636 23:30:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:34.636 23:30:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:34.636 23:30:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:34.636 23:30:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:34.636 23:30:22 -- nvmf/common.sh@295 -- # net_devs=() 00:27:34.636 23:30:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:34.636 23:30:22 -- nvmf/common.sh@296 -- # e810=() 00:27:34.636 23:30:22 -- nvmf/common.sh@296 -- # local -ga e810 00:27:34.636 23:30:22 -- nvmf/common.sh@297 -- # x722=() 00:27:34.636 23:30:22 -- nvmf/common.sh@297 -- # local -ga x722 00:27:34.636 23:30:22 -- nvmf/common.sh@298 -- # mlx=() 00:27:34.636 23:30:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:34.636 23:30:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.636 23:30:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:34.636 23:30:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:34.636 23:30:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:34.636 23:30:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.636 23:30:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:34.636 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:34.636 23:30:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.636 23:30:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:34.636 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:34.636 23:30:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:27:34.636 23:30:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:34.636 23:30:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.636 23:30:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.636 23:30:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:34.636 23:30:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.636 23:30:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:34.636 Found net devices under 0000:31:00.0: cvl_0_0 00:27:34.636 23:30:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.636 23:30:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.636 23:30:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.636 23:30:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:34.636 23:30:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.636 23:30:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:34.636 Found net devices under 0000:31:00.1: cvl_0_1 00:27:34.636 23:30:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.636 23:30:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:34.636 23:30:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:34.636 23:30:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:34.636 23:30:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:34.636 23:30:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.636 23:30:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.636 23:30:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.636 23:30:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:34.636 23:30:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.636 23:30:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.636 23:30:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:34.636 23:30:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.636 23:30:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.636 23:30:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:34.636 23:30:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:34.636 23:30:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.636 23:30:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.636 23:30:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.636 23:30:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.636 23:30:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:34.636 23:30:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.636 23:30:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.636 23:30:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:27:34.636 23:30:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:34.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:27:34.636 00:27:34.636 --- 10.0.0.2 ping statistics --- 00:27:34.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.636 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:27:34.637 23:30:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:34.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:27:34.637 00:27:34.637 --- 10.0.0.1 ping statistics --- 00:27:34.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.637 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:27:34.637 23:30:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.637 23:30:22 -- nvmf/common.sh@411 -- # return 0 00:27:34.637 23:30:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:34.637 23:30:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.637 23:30:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:34.637 23:30:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:34.637 23:30:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.637 23:30:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:34.637 23:30:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:34.637 23:30:22 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:34.637 23:30:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:34.637 23:30:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:34.637 23:30:22 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 23:30:22 -- nvmf/common.sh@470 -- # nvmfpid=4076015 00:27:34.637 23:30:22 -- nvmf/common.sh@471 -- # waitforlisten 4076015 00:27:34.637 23:30:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:34.637 23:30:22 -- common/autotest_common.sh@817 -- # '[' -z 4076015 ']' 00:27:34.637 23:30:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.637 23:30:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:34.637 23:30:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.637 23:30:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:34.637 23:30:22 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 [2024-04-26 23:30:22.870914] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:34.637 [2024-04-26 23:30:22.870978] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.637 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.637 [2024-04-26 23:30:22.942261] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.637 [2024-04-26 23:30:22.979133] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
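The two ping checks above validate the topology that nvmf_tcp_init built earlier in this trace: the target NIC is moved into its own network namespace so one machine can act as both target and initiator. Condensed from the trace (interface names and 10.0.0.x addresses are the ones this run discovered), the setup is:

    # Target NIC goes into a namespace; initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
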
00:27:34.637 [2024-04-26 23:30:22.979185] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.637 [2024-04-26 23:30:22.979193] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.637 [2024-04-26 23:30:22.979200] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.637 [2024-04-26 23:30:22.979205] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.637 [2024-04-26 23:30:22.979322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.637 [2024-04-26 23:30:22.979484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.637 [2024-04-26 23:30:22.979485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.637 23:30:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:34.637 23:30:23 -- common/autotest_common.sh@850 -- # return 0 00:27:34.637 23:30:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:34.637 23:30:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 23:30:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.637 23:30:23 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 [2024-04-26 23:30:23.698810] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 Malloc0 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 [2024-04-26 23:30:23.766138] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 
-- common/autotest_common.sh@10 -- # set +x 00:27:34.637 [2024-04-26 23:30:23.778095] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 Malloc1 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:34.637 23:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:34.637 23:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:34.637 23:30:23 -- host/multicontroller.sh@44 -- # bdevperf_pid=4076361 00:27:34.637 23:30:23 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:34.637 23:30:23 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:34.637 23:30:23 -- host/multicontroller.sh@47 -- # waitforlisten 4076361 /var/tmp/bdevperf.sock 00:27:34.637 23:30:23 -- common/autotest_common.sh@817 -- # '[' -z 4076361 ']' 00:27:34.637 23:30:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:34.637 23:30:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:34.637 23:30:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:34.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
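With the target app up, the rpc_cmd calls above amount to the following plain RPC sequence (a sketch using SPDK's scripts/rpc.py; all values mirror this run's trace, and the cnode2 block is abbreviated):

    # Create the TCP transport, one malloc-backed namespace per subsystem, and
    # two listeners per subsystem so multipath can be exercised later.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...same again for cnode2 backed by Malloc1, then bdevperf is started with
    # its own RPC socket so controllers can be attached to it on the fly:
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
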
00:27:34.637 23:30:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:34.637 23:30:23 -- common/autotest_common.sh@10 -- # set +x 00:27:35.578 23:30:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:35.578 23:30:24 -- common/autotest_common.sh@850 -- # return 0 00:27:35.578 23:30:24 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:35.578 23:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.578 23:30:24 -- common/autotest_common.sh@10 -- # set +x 00:27:35.838 NVMe0n1 00:27:35.838 23:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.838 23:30:24 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:35.838 23:30:24 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:35.838 23:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.838 23:30:24 -- common/autotest_common.sh@10 -- # set +x 00:27:35.838 23:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.838 1 00:27:35.838 23:30:24 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:35.838 23:30:24 -- common/autotest_common.sh@638 -- # local es=0 00:27:35.838 23:30:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:35.838 23:30:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:35.838 23:30:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:35.838 23:30:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:35.838 23:30:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:35.838 23:30:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:35.838 23:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.838 23:30:24 -- common/autotest_common.sh@10 -- # set +x 00:27:35.838 request: 00:27:35.838 { 00:27:35.838 "name": "NVMe0", 00:27:35.838 "trtype": "tcp", 00:27:35.838 "traddr": "10.0.0.2", 00:27:35.838 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:35.838 "hostaddr": "10.0.0.2", 00:27:35.838 "hostsvcid": "60000", 00:27:35.838 "adrfam": "ipv4", 00:27:35.838 "trsvcid": "4420", 00:27:35.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.838 "method": "bdev_nvme_attach_controller", 00:27:35.838 "req_id": 1 00:27:35.838 } 00:27:35.838 Got JSON-RPC error response 00:27:35.838 response: 00:27:35.838 { 00:27:35.838 "code": -114, 00:27:35.838 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:35.838 } 00:27:35.838 23:30:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:35.838 23:30:24 -- common/autotest_common.sh@641 -- # es=1 00:27:35.838 23:30:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:35.838 23:30:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:35.838 23:30:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:35.838 23:30:24 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:35.838 23:30:24 -- common/autotest_common.sh@638 -- # local es=0 00:27:35.838 23:30:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:35.838 23:30:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:35.838 23:30:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:35.838 23:30:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:35.838 23:30:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:35.838 23:30:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:35.838 23:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.838 23:30:24 -- common/autotest_common.sh@10 -- # set +x 00:27:35.838 request: 00:27:35.838 { 00:27:35.838 "name": "NVMe0", 00:27:35.838 "trtype": "tcp", 00:27:35.838 "traddr": "10.0.0.2", 00:27:35.838 "hostaddr": "10.0.0.2", 00:27:35.838 "hostsvcid": "60000", 00:27:35.838 "adrfam": "ipv4", 00:27:35.838 "trsvcid": "4420", 00:27:35.838 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:35.838 "method": "bdev_nvme_attach_controller", 00:27:35.838 "req_id": 1 00:27:35.838 } 00:27:35.838 Got JSON-RPC error response 00:27:35.838 response: 00:27:35.838 { 00:27:35.838 "code": -114, 00:27:35.838 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:35.838 } 00:27:35.838 23:30:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:35.838 23:30:24 -- common/autotest_common.sh@641 -- # es=1 00:27:35.838 23:30:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:35.838 23:30:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:35.838 23:30:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:35.838 23:30:24 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:35.838 23:30:24 -- common/autotest_common.sh@638 -- # local es=0 00:27:35.838 23:30:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:35.838 23:30:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:35.838 23:30:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:35.838 23:30:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:35.838 23:30:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:35.838 23:30:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:35.839 23:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.839 23:30:24 -- common/autotest_common.sh@10 -- # set +x 00:27:35.839 request: 00:27:35.839 { 00:27:35.839 "name": "NVMe0", 00:27:35.839 "trtype": "tcp", 00:27:35.839 "traddr": "10.0.0.2", 00:27:35.839 "hostaddr": 
"10.0.0.2", 00:27:35.839 "hostsvcid": "60000", 00:27:35.839 "adrfam": "ipv4", 00:27:35.839 "trsvcid": "4420", 00:27:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.839 "multipath": "disable", 00:27:35.839 "method": "bdev_nvme_attach_controller", 00:27:35.839 "req_id": 1 00:27:35.839 } 00:27:35.839 Got JSON-RPC error response 00:27:35.839 response: 00:27:35.839 { 00:27:35.839 "code": -114, 00:27:35.839 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:35.839 } 00:27:35.839 23:30:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:35.839 23:30:24 -- common/autotest_common.sh@641 -- # es=1 00:27:35.839 23:30:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:35.839 23:30:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:35.839 23:30:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:35.839 23:30:24 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:35.839 23:30:24 -- common/autotest_common.sh@638 -- # local es=0 00:27:35.839 23:30:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:35.839 23:30:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:35.839 23:30:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:35.839 23:30:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:35.839 23:30:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:35.839 23:30:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:35.839 23:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.839 23:30:24 -- common/autotest_common.sh@10 -- # set +x 00:27:35.839 request: 00:27:35.839 { 00:27:35.839 "name": "NVMe0", 00:27:35.839 "trtype": "tcp", 00:27:35.839 "traddr": "10.0.0.2", 00:27:35.839 "hostaddr": "10.0.0.2", 00:27:35.839 "hostsvcid": "60000", 00:27:35.839 "adrfam": "ipv4", 00:27:35.839 "trsvcid": "4420", 00:27:35.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.839 "multipath": "failover", 00:27:35.839 "method": "bdev_nvme_attach_controller", 00:27:35.839 "req_id": 1 00:27:35.839 } 00:27:35.839 Got JSON-RPC error response 00:27:35.839 response: 00:27:35.839 { 00:27:35.839 "code": -114, 00:27:35.839 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:35.839 } 00:27:35.839 23:30:25 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:35.839 23:30:25 -- common/autotest_common.sh@641 -- # es=1 00:27:35.839 23:30:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:35.839 23:30:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:35.839 23:30:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:35.839 23:30:25 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:35.839 23:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.839 23:30:25 -- common/autotest_common.sh@10 -- # set +x 00:27:35.839 00:27:35.839 23:30:25 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:27:35.839 23:30:25 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:35.839 23:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.839 23:30:25 -- common/autotest_common.sh@10 -- # set +x 00:27:35.839 23:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.839 23:30:25 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:35.839 23:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.839 23:30:25 -- common/autotest_common.sh@10 -- # set +x 00:27:36.100 00:27:36.100 23:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.100 23:30:25 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:36.100 23:30:25 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:36.100 23:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.100 23:30:25 -- common/autotest_common.sh@10 -- # set +x 00:27:36.100 23:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.100 23:30:25 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:36.100 23:30:25 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:37.487 0 00:27:37.487 23:30:26 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:37.487 23:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.487 23:30:26 -- common/autotest_common.sh@10 -- # set +x 00:27:37.487 23:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.487 23:30:26 -- host/multicontroller.sh@100 -- # killprocess 4076361 00:27:37.487 23:30:26 -- common/autotest_common.sh@936 -- # '[' -z 4076361 ']' 00:27:37.487 23:30:26 -- common/autotest_common.sh@940 -- # kill -0 4076361 00:27:37.487 23:30:26 -- common/autotest_common.sh@941 -- # uname 00:27:37.487 23:30:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:37.487 23:30:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4076361 00:27:37.487 23:30:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:37.487 23:30:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:37.487 23:30:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4076361' 00:27:37.487 killing process with pid 4076361 00:27:37.487 23:30:26 -- common/autotest_common.sh@955 -- # kill 4076361 00:27:37.487 23:30:26 -- common/autotest_common.sh@960 -- # wait 4076361 00:27:37.487 23:30:26 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:37.487 23:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.487 23:30:26 -- common/autotest_common.sh@10 -- # set +x 00:27:37.487 23:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.487 23:30:26 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:37.487 23:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.487 23:30:26 -- common/autotest_common.sh@10 -- # set +x 00:27:37.487 23:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.487 23:30:26 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
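The checks above boil down to one successful attach, three expected -114 rejections, and a second-path attach. A condensed sketch against the bdevperf RPC socket (every flag value is taken from this run's rpc_cmd trace; the $R shorthand is an assumption for readability):

    R='rpc.py -s /var/tmp/bdevperf.sock'
    $R bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000    # succeeds: NVMe0n1
    # Re-attaching the name NVMe0 with a different hostnqn (-q), a different
    # subsystem, or -x disable/failover on the same path is rejected with
    # code -114, as the JSON-RPC responses above show. A second listener port
    # is the supported way to add a path or a second controller:
    $R bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1                         # second path, port 4421
    $R bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
    $R bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000    # separate controller
    $R bdev_nvme_get_controllers | grep -c NVMe               # the test expects 2
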
00:27:37.487 23:30:26 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:37.487 23:30:26 -- common/autotest_common.sh@1598 -- # read -r file 00:27:37.487 23:30:26 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:37.487 23:30:26 -- common/autotest_common.sh@1597 -- # sort -u 00:27:37.487 23:30:26 -- common/autotest_common.sh@1599 -- # cat 00:27:37.487 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:37.487 [2024-04-26 23:30:23.900979] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:37.487 [2024-04-26 23:30:23.901075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4076361 ] 00:27:37.487 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.487 [2024-04-26 23:30:23.964591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.487 [2024-04-26 23:30:23.993775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.487 [2024-04-26 23:30:25.187104] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 6712ac9b-6320-449b-8782-eb0996e4e700 already exists 00:27:37.487 [2024-04-26 23:30:25.187134] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:6712ac9b-6320-449b-8782-eb0996e4e700 alias for bdev NVMe1n1 00:27:37.487 [2024-04-26 23:30:25.187144] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:37.487 Running I/O for 1 seconds... 00:27:37.487 00:27:37.487 Latency(us) 00:27:37.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.487 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:37.487 NVMe0n1 : 1.00 20590.69 80.43 0.00 0.00 6202.99 4041.39 12288.00 00:27:37.487 =================================================================================================================== 00:27:37.487 Total : 20590.69 80.43 0.00 0.00 6202.99 4041.39 12288.00 00:27:37.487 Received shutdown signal, test time was about 1.000000 seconds 00:27:37.487 00:27:37.487 Latency(us) 00:27:37.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.487 =================================================================================================================== 00:27:37.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:37.487 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:37.487 23:30:26 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:37.487 23:30:26 -- common/autotest_common.sh@1598 -- # read -r file 00:27:37.487 23:30:26 -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:37.487 23:30:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:37.487 23:30:26 -- nvmf/common.sh@117 -- # sync 00:27:37.487 23:30:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:37.487 23:30:26 -- nvmf/common.sh@120 -- # set +e 00:27:37.487 23:30:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:37.487 23:30:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:37.487 rmmod nvme_tcp 00:27:37.487 rmmod nvme_fabrics 00:27:37.487 rmmod nvme_keyring 00:27:37.487 23:30:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:37.487 23:30:26 -- nvmf/common.sh@124 -- # set -e 
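The try.txt dump above also explains the ERROR lines: NVMe1 attaches to the same subsystem as NVMe0, so its namespace carries the UUID already registered for NVMe0n1 and the alias registration fails by design; the test only checks that two controllers exist. The one-second write workload is then kicked off out of band through the bdevperf helper; a sketch of that step, assuming bdevperf was started in wait-for-tests mode (this capture does not show its command line):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests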
00:27:37.487 23:30:26 -- nvmf/common.sh@125 -- # return 0 00:27:37.487 23:30:26 -- nvmf/common.sh@478 -- # '[' -n 4076015 ']' 00:27:37.487 23:30:26 -- nvmf/common.sh@479 -- # killprocess 4076015 00:27:37.487 23:30:26 -- common/autotest_common.sh@936 -- # '[' -z 4076015 ']' 00:27:37.487 23:30:26 -- common/autotest_common.sh@940 -- # kill -0 4076015 00:27:37.487 23:30:26 -- common/autotest_common.sh@941 -- # uname 00:27:37.487 23:30:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:37.487 23:30:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4076015 00:27:37.487 23:30:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:37.487 23:30:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:37.487 23:30:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4076015' 00:27:37.487 killing process with pid 4076015 00:27:37.487 23:30:26 -- common/autotest_common.sh@955 -- # kill 4076015 00:27:37.487 23:30:26 -- common/autotest_common.sh@960 -- # wait 4076015 00:27:37.749 23:30:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:37.749 23:30:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:37.749 23:30:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:37.749 23:30:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:37.749 23:30:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:37.749 23:30:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.749 23:30:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.749 23:30:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.665 23:30:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:39.665 00:27:39.665 real 0m13.429s 00:27:39.665 user 0m16.535s 00:27:39.665 sys 0m6.018s 00:27:39.665 23:30:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:39.665 23:30:28 -- common/autotest_common.sh@10 -- # set +x 00:27:39.665 ************************************ 00:27:39.665 END TEST nvmf_multicontroller 00:27:39.665 ************************************ 00:27:39.925 23:30:28 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:39.925 23:30:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:39.925 23:30:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:39.925 23:30:28 -- common/autotest_common.sh@10 -- # set +x 00:27:39.925 ************************************ 00:27:39.925 START TEST nvmf_aer 00:27:39.925 ************************************ 00:27:39.925 23:30:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:39.925 * Looking for test storage... 
00:27:40.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:40.186 23:30:29 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.186 23:30:29 -- nvmf/common.sh@7 -- # uname -s 00:27:40.186 23:30:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.186 23:30:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.186 23:30:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.186 23:30:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.186 23:30:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.186 23:30:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.186 23:30:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.186 23:30:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.186 23:30:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.186 23:30:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.186 23:30:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:40.186 23:30:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:40.186 23:30:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.186 23:30:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.186 23:30:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.186 23:30:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.186 23:30:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.186 23:30:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.186 23:30:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.186 23:30:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.186 23:30:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.186 23:30:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.186 23:30:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.186 23:30:29 -- paths/export.sh@5 -- # export PATH 00:27:40.186 23:30:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.186 23:30:29 -- nvmf/common.sh@47 -- # : 0 00:27:40.186 23:30:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:40.186 23:30:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:40.186 23:30:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.186 23:30:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.186 23:30:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.186 23:30:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:40.186 23:30:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:40.186 23:30:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:40.186 23:30:29 -- host/aer.sh@11 -- # nvmftestinit 00:27:40.186 23:30:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:40.186 23:30:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.186 23:30:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:40.186 23:30:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:40.186 23:30:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:40.186 23:30:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.186 23:30:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.186 23:30:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.186 23:30:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:40.186 23:30:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:40.186 23:30:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:40.186 23:30:29 -- common/autotest_common.sh@10 -- # set +x 00:27:46.770 23:30:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:46.770 23:30:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.770 23:30:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.770 23:30:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.770 23:30:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.770 23:30:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.770 23:30:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.770 23:30:36 -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.770 23:30:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.770 23:30:36 -- nvmf/common.sh@296 -- # e810=() 00:27:46.770 23:30:36 -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.770 23:30:36 -- nvmf/common.sh@297 -- # x722=() 00:27:46.770 
23:30:36 -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.770 23:30:36 -- nvmf/common.sh@298 -- # mlx=() 00:27:46.770 23:30:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.770 23:30:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.770 23:30:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.770 23:30:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.770 23:30:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.770 23:30:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.770 23:30:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.770 23:30:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.770 23:30:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.770 23:30:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:46.770 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:46.770 23:30:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.770 23:30:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.770 23:30:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.770 23:30:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.770 23:30:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.770 23:30:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.770 23:30:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:46.770 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:46.770 23:30:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.031 23:30:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.031 23:30:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.031 23:30:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.031 23:30:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.031 23:30:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.031 23:30:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:47.031 23:30:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:47.031 23:30:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.031 23:30:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.031 23:30:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:47.031 23:30:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.031 23:30:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:47.031 Found net devices under 0000:31:00.0: cvl_0_0 00:27:47.031 23:30:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.031 23:30:36 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.031 23:30:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.031 23:30:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:47.031 23:30:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.031 23:30:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:47.031 Found net devices under 0000:31:00.1: cvl_0_1 00:27:47.031 23:30:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.031 23:30:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:47.031 23:30:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:47.031 23:30:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:47.031 23:30:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:47.031 23:30:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:47.031 23:30:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.031 23:30:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.031 23:30:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.031 23:30:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:47.031 23:30:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.031 23:30:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.031 23:30:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:47.031 23:30:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.031 23:30:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.031 23:30:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:47.031 23:30:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:47.031 23:30:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.031 23:30:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.031 23:30:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.031 23:30:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.031 23:30:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:47.031 23:30:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.292 23:30:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.292 23:30:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.292 23:30:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:47.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:27:47.292 00:27:47.292 --- 10.0.0.2 ping statistics --- 00:27:47.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.292 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:27:47.292 23:30:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:27:47.293 00:27:47.293 --- 10.0.0.1 ping statistics --- 00:27:47.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.293 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:27:47.293 23:30:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.293 23:30:36 -- nvmf/common.sh@411 -- # return 0 00:27:47.293 23:30:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:47.293 23:30:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.293 23:30:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:47.293 23:30:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:47.293 23:30:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.293 23:30:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:47.293 23:30:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:47.293 23:30:36 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:47.293 23:30:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:47.293 23:30:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:47.293 23:30:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.293 23:30:36 -- nvmf/common.sh@470 -- # nvmfpid=4081106 00:27:47.293 23:30:36 -- nvmf/common.sh@471 -- # waitforlisten 4081106 00:27:47.293 23:30:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:47.293 23:30:36 -- common/autotest_common.sh@817 -- # '[' -z 4081106 ']' 00:27:47.293 23:30:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.293 23:30:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:47.293 23:30:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.293 23:30:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:47.293 23:30:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.293 [2024-04-26 23:30:36.447919] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:47.293 [2024-04-26 23:30:36.447966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.293 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.293 [2024-04-26 23:30:36.516006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.293 [2024-04-26 23:30:36.546382] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.293 [2024-04-26 23:30:36.546422] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.293 [2024-04-26 23:30:36.546430] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.293 [2024-04-26 23:30:36.546437] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.293 [2024-04-26 23:30:36.546443] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
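Everything from the PCI scan down to the two pings builds the standard phy-mode TCP rig for these tests: one port of the NIC pair (cvl_0_0) is moved into a private namespace and addressed as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port. Condensed from the commands above (the interface names are specific to this machine):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator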
00:27:47.554 [2024-04-26 23:30:36.549860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.554 [2024-04-26 23:30:36.550034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.554 [2024-04-26 23:30:36.550244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.554 [2024-04-26 23:30:36.550245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.554 23:30:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:47.554 23:30:36 -- common/autotest_common.sh@850 -- # return 0 00:27:47.554 23:30:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:47.554 23:30:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:47.554 23:30:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.554 23:30:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.554 23:30:36 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.554 23:30:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.554 23:30:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.554 [2024-04-26 23:30:36.703671] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.554 23:30:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.554 23:30:36 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:47.554 23:30:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.554 23:30:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.554 Malloc0 00:27:47.554 23:30:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.554 23:30:36 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:47.554 23:30:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.554 23:30:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.554 23:30:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.554 23:30:36 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:47.554 23:30:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.554 23:30:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.554 23:30:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.554 23:30:36 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.554 23:30:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.554 23:30:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.554 [2024-04-26 23:30:36.760314] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.554 23:30:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.554 23:30:36 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:47.554 23:30:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.554 23:30:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.554 [2024-04-26 23:30:36.772107] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:47.554 [ 00:27:47.554 { 00:27:47.554 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:47.554 "subtype": "Discovery", 00:27:47.554 "listen_addresses": [], 00:27:47.554 "allow_any_host": true, 00:27:47.554 "hosts": [] 00:27:47.554 }, 00:27:47.554 { 00:27:47.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:27:47.554 "subtype": "NVMe", 00:27:47.554 "listen_addresses": [ 00:27:47.554 { 00:27:47.554 "transport": "TCP", 00:27:47.554 "trtype": "TCP", 00:27:47.554 "adrfam": "IPv4", 00:27:47.554 "traddr": "10.0.0.2", 00:27:47.554 "trsvcid": "4420" 00:27:47.554 } 00:27:47.554 ], 00:27:47.554 "allow_any_host": true, 00:27:47.554 "hosts": [], 00:27:47.554 "serial_number": "SPDK00000000000001", 00:27:47.554 "model_number": "SPDK bdev Controller", 00:27:47.554 "max_namespaces": 2, 00:27:47.554 "min_cntlid": 1, 00:27:47.554 "max_cntlid": 65519, 00:27:47.554 "namespaces": [ 00:27:47.554 { 00:27:47.554 "nsid": 1, 00:27:47.554 "bdev_name": "Malloc0", 00:27:47.554 "name": "Malloc0", 00:27:47.554 "nguid": "EBCECB6D9E2F4996877DAE5B5CD84DCC", 00:27:47.554 "uuid": "ebcecb6d-9e2f-4996-877d-ae5b5cd84dcc" 00:27:47.554 } 00:27:47.554 ] 00:27:47.555 } 00:27:47.555 ] 00:27:47.555 23:30:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.555 23:30:36 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:47.555 23:30:36 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:47.555 23:30:36 -- host/aer.sh@33 -- # aerpid=4081134 00:27:47.555 23:30:36 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:47.555 23:30:36 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:47.555 23:30:36 -- common/autotest_common.sh@1251 -- # local i=0 00:27:47.555 23:30:36 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:47.555 23:30:36 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:27:47.555 23:30:36 -- common/autotest_common.sh@1254 -- # i=1 00:27:47.555 23:30:36 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:27:47.817 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.817 23:30:36 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:47.817 23:30:36 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:27:47.817 23:30:36 -- common/autotest_common.sh@1254 -- # i=2 00:27:47.817 23:30:36 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:27:47.817 23:30:36 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:47.817 23:30:36 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:47.817 23:30:36 -- common/autotest_common.sh@1262 -- # return 0 00:27:47.817 23:30:36 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:47.817 23:30:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.817 23:30:36 -- common/autotest_common.sh@10 -- # set +x 00:27:47.817 Malloc1 00:27:47.817 23:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.817 23:30:37 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:47.817 23:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.817 23:30:37 -- common/autotest_common.sh@10 -- # set +x 00:27:47.817 23:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.817 23:30:37 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:47.817 23:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.817 23:30:37 -- common/autotest_common.sh@10 -- # set +x 00:27:47.817 Asynchronous Event Request test 00:27:47.817 Attaching to 10.0.0.2 00:27:47.817 Attached to 10.0.0.2 00:27:47.817 Registering asynchronous event callbacks... 
00:27:47.817 Starting namespace attribute notice tests for all controllers... 00:27:47.817 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:47.817 aer_cb - Changed Namespace 00:27:47.817 Cleaning up... 00:27:47.817 [ 00:27:47.817 { 00:27:47.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:47.817 "subtype": "Discovery", 00:27:47.817 "listen_addresses": [], 00:27:47.817 "allow_any_host": true, 00:27:47.817 "hosts": [] 00:27:47.817 }, 00:27:47.817 { 00:27:47.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.817 "subtype": "NVMe", 00:27:47.817 "listen_addresses": [ 00:27:47.817 { 00:27:47.817 "transport": "TCP", 00:27:47.817 "trtype": "TCP", 00:27:47.817 "adrfam": "IPv4", 00:27:47.817 "traddr": "10.0.0.2", 00:27:47.817 "trsvcid": "4420" 00:27:47.817 } 00:27:47.817 ], 00:27:47.817 "allow_any_host": true, 00:27:47.817 "hosts": [], 00:27:47.817 "serial_number": "SPDK00000000000001", 00:27:47.817 "model_number": "SPDK bdev Controller", 00:27:47.817 "max_namespaces": 2, 00:27:47.817 "min_cntlid": 1, 00:27:47.817 "max_cntlid": 65519, 00:27:47.817 "namespaces": [ 00:27:47.817 { 00:27:47.817 "nsid": 1, 00:27:47.817 "bdev_name": "Malloc0", 00:27:47.817 "name": "Malloc0", 00:27:47.817 "nguid": "EBCECB6D9E2F4996877DAE5B5CD84DCC", 00:27:47.817 "uuid": "ebcecb6d-9e2f-4996-877d-ae5b5cd84dcc" 00:27:47.817 }, 00:27:47.817 { 00:27:47.817 "nsid": 2, 00:27:47.817 "bdev_name": "Malloc1", 00:27:47.817 "name": "Malloc1", 00:27:47.817 "nguid": "299E9F382B154BB086F656D48069840C", 00:27:47.817 "uuid": "299e9f38-2b15-4bb0-86f6-56d48069840c" 00:27:47.817 } 00:27:47.817 ] 00:27:47.817 } 00:27:47.817 ] 00:27:47.817 23:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.817 23:30:37 -- host/aer.sh@43 -- # wait 4081134 00:27:47.817 23:30:37 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:47.817 23:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.817 23:30:37 -- common/autotest_common.sh@10 -- # set +x 00:27:48.079 23:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.079 23:30:37 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:48.079 23:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.079 23:30:37 -- common/autotest_common.sh@10 -- # set +x 00:27:48.079 23:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.079 23:30:37 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.080 23:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.080 23:30:37 -- common/autotest_common.sh@10 -- # set +x 00:27:48.080 23:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.080 23:30:37 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:48.080 23:30:37 -- host/aer.sh@51 -- # nvmftestfini 00:27:48.080 23:30:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:48.080 23:30:37 -- nvmf/common.sh@117 -- # sync 00:27:48.080 23:30:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.080 23:30:37 -- nvmf/common.sh@120 -- # set +e 00:27:48.080 23:30:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.080 23:30:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.080 rmmod nvme_tcp 00:27:48.080 rmmod nvme_fabrics 00:27:48.080 rmmod nvme_keyring 00:27:48.080 23:30:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.080 23:30:37 -- nvmf/common.sh@124 -- # set -e 00:27:48.080 23:30:37 -- nvmf/common.sh@125 -- # return 0 00:27:48.080 23:30:37 -- nvmf/common.sh@478 -- # '[' -n 4081106 ']' 00:27:48.080 23:30:37 
-- nvmf/common.sh@479 -- # killprocess 4081106 00:27:48.080 23:30:37 -- common/autotest_common.sh@936 -- # '[' -z 4081106 ']' 00:27:48.080 23:30:37 -- common/autotest_common.sh@940 -- # kill -0 4081106 00:27:48.080 23:30:37 -- common/autotest_common.sh@941 -- # uname 00:27:48.080 23:30:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:48.080 23:30:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4081106 00:27:48.080 23:30:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:48.080 23:30:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:48.080 23:30:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4081106' 00:27:48.080 killing process with pid 4081106 00:27:48.080 23:30:37 -- common/autotest_common.sh@955 -- # kill 4081106 00:27:48.080 [2024-04-26 23:30:37.255530] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:48.080 23:30:37 -- common/autotest_common.sh@960 -- # wait 4081106 00:27:48.342 23:30:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:48.342 23:30:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:48.342 23:30:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:48.342 23:30:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.342 23:30:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:48.342 23:30:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.342 23:30:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.342 23:30:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.259 23:30:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.259 00:27:50.259 real 0m10.368s 00:27:50.259 user 0m5.347s 00:27:50.259 sys 0m5.629s 00:27:50.259 23:30:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:50.259 23:30:39 -- common/autotest_common.sh@10 -- # set +x 00:27:50.259 ************************************ 00:27:50.259 END TEST nvmf_aer 00:27:50.259 ************************************ 00:27:50.259 23:30:39 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:50.259 23:30:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:50.259 23:30:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:50.259 23:30:39 -- common/autotest_common.sh@10 -- # set +x 00:27:50.521 ************************************ 00:27:50.521 START TEST nvmf_async_init 00:27:50.521 ************************************ 00:27:50.521 23:30:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:50.521 * Looking for test storage... 
00:27:50.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.521 23:30:39 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.521 23:30:39 -- nvmf/common.sh@7 -- # uname -s 00:27:50.521 23:30:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.521 23:30:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.521 23:30:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.521 23:30:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.521 23:30:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.521 23:30:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.521 23:30:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.521 23:30:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.521 23:30:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.521 23:30:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.521 23:30:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:50.521 23:30:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:50.521 23:30:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.521 23:30:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.521 23:30:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.521 23:30:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.521 23:30:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.521 23:30:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.521 23:30:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.521 23:30:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.521 23:30:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.521 23:30:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.521 23:30:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.521 23:30:39 -- paths/export.sh@5 -- # export PATH 00:27:50.521 23:30:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.521 23:30:39 -- nvmf/common.sh@47 -- # : 0 00:27:50.521 23:30:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.521 23:30:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.521 23:30:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.521 23:30:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.521 23:30:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.521 23:30:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.521 23:30:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.521 23:30:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.521 23:30:39 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:50.521 23:30:39 -- host/async_init.sh@14 -- # null_block_size=512 00:27:50.522 23:30:39 -- host/async_init.sh@15 -- # null_bdev=null0 00:27:50.522 23:30:39 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:50.522 23:30:39 -- host/async_init.sh@20 -- # uuidgen 00:27:50.522 23:30:39 -- host/async_init.sh@20 -- # tr -d - 00:27:50.522 23:30:39 -- host/async_init.sh@20 -- # nguid=4b633b5333a74b5fae72cb391ff3257a 00:27:50.522 23:30:39 -- host/async_init.sh@22 -- # nvmftestinit 00:27:50.522 23:30:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:50.522 23:30:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.522 23:30:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:50.522 23:30:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:50.522 23:30:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:50.522 23:30:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.522 23:30:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.522 23:30:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.784 23:30:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:50.784 23:30:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:50.784 23:30:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.784 23:30:39 -- common/autotest_common.sh@10 -- # set +x 00:27:57.375 23:30:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:57.375 23:30:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:57.375 23:30:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:57.375 23:30:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:57.375 23:30:46 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:57.375 23:30:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:57.375 23:30:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:57.375 23:30:46 -- nvmf/common.sh@295 -- # net_devs=() 00:27:57.375 23:30:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:57.375 23:30:46 -- nvmf/common.sh@296 -- # e810=() 00:27:57.375 23:30:46 -- nvmf/common.sh@296 -- # local -ga e810 00:27:57.375 23:30:46 -- nvmf/common.sh@297 -- # x722=() 00:27:57.375 23:30:46 -- nvmf/common.sh@297 -- # local -ga x722 00:27:57.375 23:30:46 -- nvmf/common.sh@298 -- # mlx=() 00:27:57.375 23:30:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:57.375 23:30:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.375 23:30:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:57.375 23:30:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:57.375 23:30:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:57.375 23:30:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:57.375 23:30:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:57.375 23:30:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:57.375 23:30:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.375 23:30:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:57.375 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:57.375 23:30:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.375 23:30:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.375 23:30:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.375 23:30:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.375 23:30:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.375 23:30:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.375 23:30:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:57.375 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:57.375 23:30:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.375 23:30:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.376 23:30:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.376 23:30:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.376 23:30:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.376 23:30:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:57.376 23:30:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:57.376 23:30:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:57.376 23:30:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.376 
23:30:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.376 23:30:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:57.376 23:30:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.376 23:30:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:57.376 Found net devices under 0000:31:00.0: cvl_0_0 00:27:57.376 23:30:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.376 23:30:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.376 23:30:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.376 23:30:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:57.376 23:30:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.376 23:30:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:57.376 Found net devices under 0000:31:00.1: cvl_0_1 00:27:57.376 23:30:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.376 23:30:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:57.376 23:30:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:57.376 23:30:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:57.376 23:30:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:57.376 23:30:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:57.376 23:30:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.376 23:30:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.376 23:30:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.376 23:30:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:57.376 23:30:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.376 23:30:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.376 23:30:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:57.376 23:30:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.376 23:30:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.376 23:30:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:57.376 23:30:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:57.376 23:30:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.376 23:30:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.376 23:30:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.376 23:30:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.376 23:30:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:57.376 23:30:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.376 23:30:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.376 23:30:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.376 23:30:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:57.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.740 ms 00:27:57.376 00:27:57.376 --- 10.0.0.2 ping statistics --- 00:27:57.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.376 rtt min/avg/max/mdev = 0.740/0.740/0.740/0.000 ms 00:27:57.376 23:30:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:57.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:27:57.376 00:27:57.376 --- 10.0.0.1 ping statistics --- 00:27:57.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.376 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:27:57.376 23:30:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.376 23:30:46 -- nvmf/common.sh@411 -- # return 0 00:27:57.376 23:30:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:57.376 23:30:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.376 23:30:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:57.376 23:30:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:57.376 23:30:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.376 23:30:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:57.376 23:30:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:57.376 23:30:46 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:57.376 23:30:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:57.376 23:30:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:57.376 23:30:46 -- common/autotest_common.sh@10 -- # set +x 00:27:57.376 23:30:46 -- nvmf/common.sh@470 -- # nvmfpid=4085284 00:27:57.376 23:30:46 -- nvmf/common.sh@471 -- # waitforlisten 4085284 00:27:57.376 23:30:46 -- common/autotest_common.sh@817 -- # '[' -z 4085284 ']' 00:27:57.376 23:30:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.376 23:30:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:57.376 23:30:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.376 23:30:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:57.376 23:30:46 -- common/autotest_common.sh@10 -- # set +x 00:27:57.376 23:30:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:57.376 [2024-04-26 23:30:46.542226] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:57.376 [2024-04-26 23:30:46.542277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.376 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.376 [2024-04-26 23:30:46.609115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.637 [2024-04-26 23:30:46.641042] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.637 [2024-04-26 23:30:46.641084] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.637 [2024-04-26 23:30:46.641091] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.637 [2024-04-26 23:30:46.641098] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.637 [2024-04-26 23:30:46.641104] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
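This second target is started single-core (-m 0x1), presumably so the asynchronous attach and reset callbacks under test all land on one reactor. Reduced to its essentials, with the readiness poll standing in for the harness's waitforlisten helper (rpc_get_methods is simply a cheap RPC used here to probe the socket):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# wait until the target's RPC socket answers before configuring it
until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done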
00:27:57.637 [2024-04-26 23:30:46.641122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.210 23:30:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:58.210 23:30:47 -- common/autotest_common.sh@850 -- # return 0 00:27:58.210 23:30:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:58.210 23:30:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:58.210 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.210 23:30:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.210 23:30:47 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:58.210 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.210 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.210 [2024-04-26 23:30:47.341933] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.210 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.210 23:30:47 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:58.210 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.210 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.210 null0 00:27:58.210 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.210 23:30:47 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:58.210 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.210 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.210 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.210 23:30:47 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:58.210 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.210 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.210 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.210 23:30:47 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4b633b5333a74b5fae72cb391ff3257a 00:27:58.210 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.210 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.210 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.210 23:30:47 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:58.210 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.210 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.210 [2024-04-26 23:30:47.382120] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.210 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.210 23:30:47 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:58.210 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.210 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.470 nvme0n1 00:27:58.470 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.470 23:30:47 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:58.470 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.470 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.470 [ 00:27:58.470 { 00:27:58.470 "name": "nvme0n1", 00:27:58.470 "aliases": [ 00:27:58.470 
"4b633b53-33a7-4b5f-ae72-cb391ff3257a" 00:27:58.470 ], 00:27:58.470 "product_name": "NVMe disk", 00:27:58.470 "block_size": 512, 00:27:58.470 "num_blocks": 2097152, 00:27:58.470 "uuid": "4b633b53-33a7-4b5f-ae72-cb391ff3257a", 00:27:58.470 "assigned_rate_limits": { 00:27:58.470 "rw_ios_per_sec": 0, 00:27:58.470 "rw_mbytes_per_sec": 0, 00:27:58.470 "r_mbytes_per_sec": 0, 00:27:58.470 "w_mbytes_per_sec": 0 00:27:58.470 }, 00:27:58.470 "claimed": false, 00:27:58.470 "zoned": false, 00:27:58.470 "supported_io_types": { 00:27:58.470 "read": true, 00:27:58.470 "write": true, 00:27:58.470 "unmap": false, 00:27:58.470 "write_zeroes": true, 00:27:58.470 "flush": true, 00:27:58.470 "reset": true, 00:27:58.470 "compare": true, 00:27:58.470 "compare_and_write": true, 00:27:58.470 "abort": true, 00:27:58.470 "nvme_admin": true, 00:27:58.470 "nvme_io": true 00:27:58.470 }, 00:27:58.470 "memory_domains": [ 00:27:58.471 { 00:27:58.471 "dma_device_id": "system", 00:27:58.471 "dma_device_type": 1 00:27:58.471 } 00:27:58.471 ], 00:27:58.471 "driver_specific": { 00:27:58.471 "nvme": [ 00:27:58.471 { 00:27:58.471 "trid": { 00:27:58.471 "trtype": "TCP", 00:27:58.471 "adrfam": "IPv4", 00:27:58.471 "traddr": "10.0.0.2", 00:27:58.471 "trsvcid": "4420", 00:27:58.471 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:58.471 }, 00:27:58.471 "ctrlr_data": { 00:27:58.471 "cntlid": 1, 00:27:58.471 "vendor_id": "0x8086", 00:27:58.471 "model_number": "SPDK bdev Controller", 00:27:58.471 "serial_number": "00000000000000000000", 00:27:58.471 "firmware_revision": "24.05", 00:27:58.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:58.471 "oacs": { 00:27:58.471 "security": 0, 00:27:58.471 "format": 0, 00:27:58.471 "firmware": 0, 00:27:58.471 "ns_manage": 0 00:27:58.471 }, 00:27:58.471 "multi_ctrlr": true, 00:27:58.471 "ana_reporting": false 00:27:58.471 }, 00:27:58.471 "vs": { 00:27:58.471 "nvme_version": "1.3" 00:27:58.471 }, 00:27:58.471 "ns_data": { 00:27:58.471 "id": 1, 00:27:58.471 "can_share": true 00:27:58.471 } 00:27:58.471 } 00:27:58.471 ], 00:27:58.471 "mp_policy": "active_passive" 00:27:58.471 } 00:27:58.471 } 00:27:58.471 ] 00:27:58.471 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.471 23:30:47 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:58.471 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.471 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.471 [2024-04-26 23:30:47.630626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:58.471 [2024-04-26 23:30:47.630682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2377780 (9): Bad file descriptor 00:27:58.731 [2024-04-26 23:30:47.762928] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:58.731 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.731 23:30:47 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:58.731 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.731 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.731 [ 00:27:58.731 { 00:27:58.731 "name": "nvme0n1", 00:27:58.731 "aliases": [ 00:27:58.731 "4b633b53-33a7-4b5f-ae72-cb391ff3257a" 00:27:58.731 ], 00:27:58.731 "product_name": "NVMe disk", 00:27:58.731 "block_size": 512, 00:27:58.731 "num_blocks": 2097152, 00:27:58.731 "uuid": "4b633b53-33a7-4b5f-ae72-cb391ff3257a", 00:27:58.731 "assigned_rate_limits": { 00:27:58.731 "rw_ios_per_sec": 0, 00:27:58.731 "rw_mbytes_per_sec": 0, 00:27:58.731 "r_mbytes_per_sec": 0, 00:27:58.731 "w_mbytes_per_sec": 0 00:27:58.731 }, 00:27:58.731 "claimed": false, 00:27:58.731 "zoned": false, 00:27:58.731 "supported_io_types": { 00:27:58.731 "read": true, 00:27:58.731 "write": true, 00:27:58.731 "unmap": false, 00:27:58.731 "write_zeroes": true, 00:27:58.731 "flush": true, 00:27:58.731 "reset": true, 00:27:58.731 "compare": true, 00:27:58.731 "compare_and_write": true, 00:27:58.731 "abort": true, 00:27:58.731 "nvme_admin": true, 00:27:58.731 "nvme_io": true 00:27:58.731 }, 00:27:58.731 "memory_domains": [ 00:27:58.731 { 00:27:58.731 "dma_device_id": "system", 00:27:58.731 "dma_device_type": 1 00:27:58.731 } 00:27:58.731 ], 00:27:58.731 "driver_specific": { 00:27:58.731 "nvme": [ 00:27:58.731 { 00:27:58.731 "trid": { 00:27:58.731 "trtype": "TCP", 00:27:58.731 "adrfam": "IPv4", 00:27:58.731 "traddr": "10.0.0.2", 00:27:58.731 "trsvcid": "4420", 00:27:58.731 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:58.731 }, 00:27:58.731 "ctrlr_data": { 00:27:58.731 "cntlid": 2, 00:27:58.731 "vendor_id": "0x8086", 00:27:58.731 "model_number": "SPDK bdev Controller", 00:27:58.731 "serial_number": "00000000000000000000", 00:27:58.731 "firmware_revision": "24.05", 00:27:58.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:58.731 "oacs": { 00:27:58.731 "security": 0, 00:27:58.731 "format": 0, 00:27:58.731 "firmware": 0, 00:27:58.731 "ns_manage": 0 00:27:58.731 }, 00:27:58.731 "multi_ctrlr": true, 00:27:58.731 "ana_reporting": false 00:27:58.731 }, 00:27:58.731 "vs": { 00:27:58.731 "nvme_version": "1.3" 00:27:58.731 }, 00:27:58.731 "ns_data": { 00:27:58.731 "id": 1, 00:27:58.731 "can_share": true 00:27:58.731 } 00:27:58.731 } 00:27:58.731 ], 00:27:58.731 "mp_policy": "active_passive" 00:27:58.731 } 00:27:58.731 } 00:27:58.731 ] 00:27:58.731 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.731 23:30:47 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.731 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.731 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.731 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.731 23:30:47 -- host/async_init.sh@53 -- # mktemp 00:27:58.731 23:30:47 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.YWZx0f6P7N 00:27:58.731 23:30:47 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:58.731 23:30:47 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.YWZx0f6P7N 00:27:58.731 23:30:47 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:58.731 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.731 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.731 23:30:47 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.731 23:30:47 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:58.731 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.731 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.731 [2024-04-26 23:30:47.811193] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:58.731 [2024-04-26 23:30:47.811297] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:58.731 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.732 23:30:47 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YWZx0f6P7N 00:27:58.732 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.732 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.732 [2024-04-26 23:30:47.819207] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:58.732 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.732 23:30:47 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YWZx0f6P7N 00:27:58.732 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.732 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.732 [2024-04-26 23:30:47.827233] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:58.732 [2024-04-26 23:30:47.827267] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:58.732 nvme0n1 00:27:58.732 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.732 23:30:47 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:58.732 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.732 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.732 [ 00:27:58.732 { 00:27:58.732 "name": "nvme0n1", 00:27:58.732 "aliases": [ 00:27:58.732 "4b633b53-33a7-4b5f-ae72-cb391ff3257a" 00:27:58.732 ], 00:27:58.732 "product_name": "NVMe disk", 00:27:58.732 "block_size": 512, 00:27:58.732 "num_blocks": 2097152, 00:27:58.732 "uuid": "4b633b53-33a7-4b5f-ae72-cb391ff3257a", 00:27:58.732 "assigned_rate_limits": { 00:27:58.732 "rw_ios_per_sec": 0, 00:27:58.732 "rw_mbytes_per_sec": 0, 00:27:58.732 "r_mbytes_per_sec": 0, 00:27:58.732 "w_mbytes_per_sec": 0 00:27:58.732 }, 00:27:58.732 "claimed": false, 00:27:58.732 "zoned": false, 00:27:58.732 "supported_io_types": { 00:27:58.732 "read": true, 00:27:58.732 "write": true, 00:27:58.732 "unmap": false, 00:27:58.732 "write_zeroes": true, 00:27:58.732 "flush": true, 00:27:58.732 "reset": true, 00:27:58.732 "compare": true, 00:27:58.732 "compare_and_write": true, 00:27:58.732 "abort": true, 00:27:58.732 "nvme_admin": true, 00:27:58.732 "nvme_io": true 00:27:58.732 }, 00:27:58.732 "memory_domains": [ 00:27:58.732 { 00:27:58.732 "dma_device_id": "system", 00:27:58.732 "dma_device_type": 1 00:27:58.732 } 00:27:58.732 ], 00:27:58.732 "driver_specific": { 00:27:58.732 "nvme": [ 00:27:58.732 { 00:27:58.732 "trid": { 00:27:58.732 "trtype": "TCP", 00:27:58.732 "adrfam": "IPv4", 00:27:58.732 "traddr": "10.0.0.2", 
00:27:58.732 "trsvcid": "4421", 00:27:58.732 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:58.732 }, 00:27:58.732 "ctrlr_data": { 00:27:58.732 "cntlid": 3, 00:27:58.732 "vendor_id": "0x8086", 00:27:58.732 "model_number": "SPDK bdev Controller", 00:27:58.732 "serial_number": "00000000000000000000", 00:27:58.732 "firmware_revision": "24.05", 00:27:58.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:58.732 "oacs": { 00:27:58.732 "security": 0, 00:27:58.732 "format": 0, 00:27:58.732 "firmware": 0, 00:27:58.732 "ns_manage": 0 00:27:58.732 }, 00:27:58.732 "multi_ctrlr": true, 00:27:58.732 "ana_reporting": false 00:27:58.732 }, 00:27:58.732 "vs": { 00:27:58.732 "nvme_version": "1.3" 00:27:58.732 }, 00:27:58.732 "ns_data": { 00:27:58.732 "id": 1, 00:27:58.732 "can_share": true 00:27:58.732 } 00:27:58.732 } 00:27:58.732 ], 00:27:58.732 "mp_policy": "active_passive" 00:27:58.732 } 00:27:58.732 } 00:27:58.732 ] 00:27:58.732 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.732 23:30:47 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.732 23:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.732 23:30:47 -- common/autotest_common.sh@10 -- # set +x 00:27:58.732 23:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.732 23:30:47 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.YWZx0f6P7N 00:27:58.732 23:30:47 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:58.732 23:30:47 -- host/async_init.sh@78 -- # nvmftestfini 00:27:58.732 23:30:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:58.732 23:30:47 -- nvmf/common.sh@117 -- # sync 00:27:58.732 23:30:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:58.732 23:30:47 -- nvmf/common.sh@120 -- # set +e 00:27:58.732 23:30:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:58.732 23:30:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:58.732 rmmod nvme_tcp 00:27:58.732 rmmod nvme_fabrics 00:27:58.732 rmmod nvme_keyring 00:27:58.992 23:30:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:58.992 23:30:48 -- nvmf/common.sh@124 -- # set -e 00:27:58.992 23:30:48 -- nvmf/common.sh@125 -- # return 0 00:27:58.992 23:30:48 -- nvmf/common.sh@478 -- # '[' -n 4085284 ']' 00:27:58.992 23:30:48 -- nvmf/common.sh@479 -- # killprocess 4085284 00:27:58.992 23:30:48 -- common/autotest_common.sh@936 -- # '[' -z 4085284 ']' 00:27:58.992 23:30:48 -- common/autotest_common.sh@940 -- # kill -0 4085284 00:27:58.992 23:30:48 -- common/autotest_common.sh@941 -- # uname 00:27:58.992 23:30:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:58.992 23:30:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4085284 00:27:58.992 23:30:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:58.992 23:30:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:58.992 23:30:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4085284' 00:27:58.992 killing process with pid 4085284 00:27:58.992 23:30:48 -- common/autotest_common.sh@955 -- # kill 4085284 00:27:58.992 [2024-04-26 23:30:48.068385] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:58.992 [2024-04-26 23:30:48.068413] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:58.992 23:30:48 -- common/autotest_common.sh@960 -- # wait 4085284 00:27:58.992 23:30:48 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:58.992 23:30:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:58.992 23:30:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:58.992 23:30:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.992 23:30:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.992 23:30:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.992 23:30:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.992 23:30:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.537 23:30:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:01.537 00:28:01.537 real 0m10.624s 00:28:01.537 user 0m3.578s 00:28:01.537 sys 0m5.387s 00:28:01.537 23:30:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:01.537 23:30:50 -- common/autotest_common.sh@10 -- # set +x 00:28:01.537 ************************************ 00:28:01.537 END TEST nvmf_async_init 00:28:01.537 ************************************ 00:28:01.537 23:30:50 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:01.537 23:30:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:01.537 23:30:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:01.537 23:30:50 -- common/autotest_common.sh@10 -- # set +x 00:28:01.537 ************************************ 00:28:01.537 START TEST dma 00:28:01.537 ************************************ 00:28:01.537 23:30:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:01.537 * Looking for test storage... 00:28:01.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:01.537 23:30:50 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.537 23:30:50 -- nvmf/common.sh@7 -- # uname -s 00:28:01.537 23:30:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.537 23:30:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.537 23:30:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.537 23:30:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.537 23:30:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.537 23:30:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.537 23:30:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.537 23:30:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.537 23:30:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.537 23:30:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.537 23:30:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:01.537 23:30:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:01.537 23:30:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.537 23:30:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.537 23:30:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.537 23:30:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.537 23:30:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.537 23:30:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.537 23:30:50 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.537 23:30:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.537 23:30:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.537 23:30:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.537 23:30:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.537 23:30:50 -- paths/export.sh@5 -- # export PATH 00:28:01.537 23:30:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.537 23:30:50 -- nvmf/common.sh@47 -- # : 0 00:28:01.537 23:30:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.537 23:30:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.537 23:30:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.537 23:30:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.537 23:30:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.537 23:30:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.537 23:30:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.537 23:30:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.537 23:30:50 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:01.537 23:30:50 -- host/dma.sh@13 -- # exit 0 00:28:01.537 00:28:01.537 real 0m0.137s 00:28:01.537 user 0m0.062s 00:28:01.537 sys 0m0.084s 00:28:01.537 23:30:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:01.537 23:30:50 -- common/autotest_common.sh@10 -- # set +x 00:28:01.537 ************************************ 00:28:01.537 END TEST dma 00:28:01.537 
************************************ 00:28:01.537 23:30:50 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:01.537 23:30:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:01.537 23:30:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:01.537 23:30:50 -- common/autotest_common.sh@10 -- # set +x 00:28:01.537 ************************************ 00:28:01.537 START TEST nvmf_identify 00:28:01.537 ************************************ 00:28:01.537 23:30:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:01.799 * Looking for test storage... 00:28:01.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:01.799 23:30:50 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.799 23:30:50 -- nvmf/common.sh@7 -- # uname -s 00:28:01.799 23:30:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.799 23:30:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.799 23:30:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.799 23:30:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.799 23:30:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.799 23:30:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.799 23:30:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.799 23:30:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.799 23:30:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.799 23:30:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.799 23:30:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:01.799 23:30:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:01.799 23:30:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.799 23:30:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.799 23:30:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.799 23:30:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.799 23:30:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.799 23:30:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.799 23:30:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.799 23:30:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.799 23:30:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.799 23:30:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.799 23:30:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.799 23:30:50 -- paths/export.sh@5 -- # export PATH 00:28:01.799 23:30:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.799 23:30:50 -- nvmf/common.sh@47 -- # : 0 00:28:01.799 23:30:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.799 23:30:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.799 23:30:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.799 23:30:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.799 23:30:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.799 23:30:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.799 23:30:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.799 23:30:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.799 23:30:50 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:01.799 23:30:50 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:01.799 23:30:50 -- host/identify.sh@14 -- # nvmftestinit 00:28:01.799 23:30:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:01.799 23:30:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.799 23:30:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:01.799 23:30:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:01.799 23:30:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:01.799 23:30:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.799 23:30:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.799 23:30:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.799 23:30:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:01.799 23:30:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:01.799 23:30:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:01.799 23:30:50 -- common/autotest_common.sh@10 -- # set +x 00:28:09.947 23:30:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
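In the prologue above, nvmf/common.sh@17 builds the host identity from nvme-cli. One way to reproduce the two values seen in the trace (the hostid is just the uuid suffix of the generated NQN; common.sh may derive it differently internally):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # e.g. 00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")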
00:28:09.947 23:30:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.947 23:30:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.947 23:30:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:09.947 23:30:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.947 23:30:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.947 23:30:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.947 23:30:57 -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.947 23:30:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.947 23:30:57 -- nvmf/common.sh@296 -- # e810=() 00:28:09.947 23:30:57 -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.947 23:30:57 -- nvmf/common.sh@297 -- # x722=() 00:28:09.947 23:30:57 -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.947 23:30:57 -- nvmf/common.sh@298 -- # mlx=() 00:28:09.947 23:30:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.947 23:30:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.947 23:30:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.947 23:30:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.947 23:30:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.947 23:30:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:09.947 23:30:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.947 23:30:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.947 23:30:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.947 23:30:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:09.947 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:09.947 23:30:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.947 23:30:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.947 23:30:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.947 23:30:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.947 23:30:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.947 23:30:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.947 23:30:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:09.947 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:09.948 23:30:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.948 23:30:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.948 23:30:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.948 23:30:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.948 23:30:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.948 23:30:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
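gather_supported_nvmf_pci_devs, traced above, matches the two ports of an Intel E810 (device id 0x159b) and resolves each to its kernel net device. Outside the harness, the same lookup can be approximated with lspci and sysfs:

    lspci -Dnd 8086:159b                      # lists 0000:31:00.0 and 0000:31:00.1 here
    ls /sys/bus/pci/devices/0000:31:00.0/net  # -> cvl_0_0, as in "Found net devices under ..."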
00:28:09.948 23:30:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.948 23:30:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.948 23:30:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.948 23:30:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.948 23:30:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:09.948 23:30:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.948 23:30:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:09.948 Found net devices under 0000:31:00.0: cvl_0_0 00:28:09.948 23:30:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.948 23:30:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.948 23:30:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.948 23:30:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:09.948 23:30:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.948 23:30:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:09.948 Found net devices under 0000:31:00.1: cvl_0_1 00:28:09.948 23:30:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.948 23:30:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:09.948 23:30:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:09.948 23:30:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:09.948 23:30:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:09.948 23:30:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:09.948 23:30:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.948 23:30:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.948 23:30:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.948 23:30:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.948 23:30:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.948 23:30:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.948 23:30:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.948 23:30:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.948 23:30:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.948 23:30:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.948 23:30:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.948 23:30:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.948 23:30:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.948 23:30:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.948 23:30:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.948 23:30:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.948 23:30:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.948 23:30:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.948 23:30:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.948 23:30:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:09.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:28:09.948 00:28:09.948 --- 10.0.0.2 ping statistics --- 00:28:09.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.948 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:28:09.948 23:30:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:28:09.948 00:28:09.948 --- 10.0.0.1 ping statistics --- 00:28:09.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.948 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:28:09.948 23:30:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.948 23:30:58 -- nvmf/common.sh@411 -- # return 0 00:28:09.948 23:30:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:09.948 23:30:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.948 23:30:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:09.948 23:30:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:09.948 23:30:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.948 23:30:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:09.948 23:30:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:09.948 23:30:58 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:09.948 23:30:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:09.948 23:30:58 -- common/autotest_common.sh@10 -- # set +x 00:28:09.948 23:30:58 -- host/identify.sh@19 -- # nvmfpid=4089990 00:28:09.948 23:30:58 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:09.948 23:30:58 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:09.948 23:30:58 -- host/identify.sh@23 -- # waitforlisten 4089990 00:28:09.948 23:30:58 -- common/autotest_common.sh@817 -- # '[' -z 4089990 ']' 00:28:09.948 23:30:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.948 23:30:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:09.948 23:30:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.948 23:30:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:09.948 23:30:58 -- common/autotest_common.sh@10 -- # set +x 00:28:09.948 [2024-04-26 23:30:58.172442] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:28:09.948 [2024-04-26 23:30:58.172506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.948 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.948 [2024-04-26 23:30:58.243799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.948 [2024-04-26 23:30:58.283827] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.948 [2024-04-26 23:30:58.283897] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
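nvmf_tcp_init, traced above, moves one E810 port into a fresh namespace so the target (10.0.0.2) and initiator (10.0.0.1) talk over real wire, then proves reachability in both directions. Condensed to the commands that actually ran:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator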
00:28:09.948 [2024-04-26 23:30:58.283905] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.948 [2024-04-26 23:30:58.283912] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.948 [2024-04-26 23:30:58.283918] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:09.948 [2024-04-26 23:30:58.283979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.949 [2024-04-26 23:30:58.284102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.949 [2024-04-26 23:30:58.284247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.949 [2024-04-26 23:30:58.284248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.949 23:30:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:09.949 23:30:58 -- common/autotest_common.sh@850 -- # return 0 00:28:09.949 23:30:58 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:09.949 23:30:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.949 23:30:58 -- common/autotest_common.sh@10 -- # set +x 00:28:09.949 [2024-04-26 23:30:58.960407] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.949 23:30:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.949 23:30:58 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:09.949 23:30:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:09.949 23:30:58 -- common/autotest_common.sh@10 -- # set +x 00:28:09.949 23:30:59 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:09.949 23:30:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.949 23:30:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.949 Malloc0 00:28:09.949 23:30:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.949 23:30:59 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:09.949 23:30:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.949 23:30:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.949 23:30:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.949 23:30:59 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:09.949 23:30:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.949 23:30:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.949 23:30:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.949 23:30:59 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.949 23:30:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.949 23:30:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.949 [2024-04-26 23:30:59.059771] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.949 23:30:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.949 23:30:59 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:09.949 23:30:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.949 23:30:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.949 23:30:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.949 23:30:59 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:28:09.949 23:30:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.949 23:30:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.949 [2024-04-26 23:30:59.083605] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:09.949 [ 00:28:09.949 { 00:28:09.949 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:09.949 "subtype": "Discovery", 00:28:09.949 "listen_addresses": [ 00:28:09.949 { 00:28:09.949 "transport": "TCP", 00:28:09.949 "trtype": "TCP", 00:28:09.949 "adrfam": "IPv4", 00:28:09.949 "traddr": "10.0.0.2", 00:28:09.949 "trsvcid": "4420" 00:28:09.949 } 00:28:09.949 ], 00:28:09.949 "allow_any_host": true, 00:28:09.949 "hosts": [] 00:28:09.949 }, 00:28:09.949 { 00:28:09.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:09.949 "subtype": "NVMe", 00:28:09.949 "listen_addresses": [ 00:28:09.949 { 00:28:09.949 "transport": "TCP", 00:28:09.949 "trtype": "TCP", 00:28:09.949 "adrfam": "IPv4", 00:28:09.949 "traddr": "10.0.0.2", 00:28:09.949 "trsvcid": "4420" 00:28:09.949 } 00:28:09.949 ], 00:28:09.949 "allow_any_host": true, 00:28:09.949 "hosts": [], 00:28:09.949 "serial_number": "SPDK00000000000001", 00:28:09.949 "model_number": "SPDK bdev Controller", 00:28:09.949 "max_namespaces": 32, 00:28:09.949 "min_cntlid": 1, 00:28:09.949 "max_cntlid": 65519, 00:28:09.949 "namespaces": [ 00:28:09.949 { 00:28:09.949 "nsid": 1, 00:28:09.949 "bdev_name": "Malloc0", 00:28:09.949 "name": "Malloc0", 00:28:09.949 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:09.949 "eui64": "ABCDEF0123456789", 00:28:09.949 "uuid": "1fa27bb2-6cd6-4e2b-9bbc-c964a7fa2a84" 00:28:09.949 } 00:28:09.949 ] 00:28:09.949 } 00:28:09.949 ] 00:28:09.949 23:30:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.949 23:30:59 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:09.949 [2024-04-26 23:30:59.119454] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
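Above, identify.sh provisions cnode1 (a 64 MiB malloc bdev published with fixed NGUID/EUI64 values), confirms both subsystems via nvmf_get_subsystems, then points spdk_nvme_identify at the discovery NQN. The equivalent direct invocations, again assuming rpc.py against the default socket:

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all    # -L all enables the DEBUG flood that follows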
00:28:09.949 [2024-04-26 23:30:59.119493] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090094 ] 00:28:09.949 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.949 [2024-04-26 23:30:59.152472] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:09.949 [2024-04-26 23:30:59.152519] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:09.949 [2024-04-26 23:30:59.152524] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:09.949 [2024-04-26 23:30:59.152536] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:09.949 [2024-04-26 23:30:59.152543] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:09.949 [2024-04-26 23:30:59.155869] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:09.949 [2024-04-26 23:30:59.155901] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13c2b60 0 00:28:09.949 [2024-04-26 23:30:59.163849] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:09.949 [2024-04-26 23:30:59.163859] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:09.949 [2024-04-26 23:30:59.163863] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:09.949 [2024-04-26 23:30:59.163867] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:09.949 [2024-04-26 23:30:59.163902] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.949 [2024-04-26 23:30:59.163907] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.949 [2024-04-26 23:30:59.163911] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c2b60) 00:28:09.950 [2024-04-26 23:30:59.163924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:09.950 [2024-04-26 23:30:59.163939] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b180, cid 0, qid 0 00:28:09.950 [2024-04-26 23:30:59.171848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:09.950 [2024-04-26 23:30:59.171858] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:09.950 [2024-04-26 23:30:59.171862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.171866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b180) on tqpair=0x13c2b60 00:28:09.950 [2024-04-26 23:30:59.171879] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:09.950 [2024-04-26 23:30:59.171886] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:09.950 [2024-04-26 23:30:59.171891] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:09.950 [2024-04-26 23:30:59.171904] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.171908] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.171911] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c2b60) 00:28:09.950 [2024-04-26 23:30:59.171919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.950 [2024-04-26 23:30:59.171932] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b180, cid 0, qid 0 00:28:09.950 [2024-04-26 23:30:59.172141] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:09.950 [2024-04-26 23:30:59.172148] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:09.950 [2024-04-26 23:30:59.172151] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.172159] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b180) on tqpair=0x13c2b60 00:28:09.950 [2024-04-26 23:30:59.172165] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:09.950 [2024-04-26 23:30:59.172173] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:09.950 [2024-04-26 23:30:59.172179] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.172183] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.172186] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c2b60) 00:28:09.950 [2024-04-26 23:30:59.172193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.950 [2024-04-26 23:30:59.172203] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b180, cid 0, qid 0 00:28:09.950 [2024-04-26 23:30:59.172411] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:09.950 [2024-04-26 23:30:59.172417] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:09.950 [2024-04-26 23:30:59.172420] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.172424] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b180) on tqpair=0x13c2b60 00:28:09.950 [2024-04-26 23:30:59.172430] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:09.950 [2024-04-26 23:30:59.172437] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:09.950 [2024-04-26 23:30:59.172444] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.172447] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.172451] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c2b60) 00:28:09.950 [2024-04-26 23:30:59.172458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.950 [2024-04-26 23:30:59.172467] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b180, cid 0, qid 0 00:28:09.950 [2024-04-26 23:30:59.172666] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:09.950 [2024-04-26 
23:30:59.172672] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:09.950 [2024-04-26 23:30:59.172675] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.172679] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b180) on tqpair=0x13c2b60 00:28:09.950 [2024-04-26 23:30:59.172685] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:09.950 [2024-04-26 23:30:59.172694] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.172697] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.172701] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c2b60) 00:28:09.950 [2024-04-26 23:30:59.172708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.950 [2024-04-26 23:30:59.172717] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b180, cid 0, qid 0 00:28:09.950 [2024-04-26 23:30:59.172893] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:09.950 [2024-04-26 23:30:59.172900] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:09.950 [2024-04-26 23:30:59.172903] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.172907] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b180) on tqpair=0x13c2b60 00:28:09.950 [2024-04-26 23:30:59.172912] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:09.950 [2024-04-26 23:30:59.172919] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:09.950 [2024-04-26 23:30:59.172926] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:09.950 [2024-04-26 23:30:59.173031] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:09.950 [2024-04-26 23:30:59.173036] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:09.950 [2024-04-26 23:30:59.173044] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.173047] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.173051] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c2b60) 00:28:09.950 [2024-04-26 23:30:59.173057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.950 [2024-04-26 23:30:59.173068] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b180, cid 0, qid 0 00:28:09.950 [2024-04-26 23:30:59.173272] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:09.950 [2024-04-26 23:30:59.173278] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:09.950 [2024-04-26 23:30:59.173282] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:09.950 [2024-04-26 23:30:59.173285] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b180) on tqpair=0x13c2b60 00:28:09.950 [2024-04-26 23:30:59.173291] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:09.950 [2024-04-26 23:30:59.173299] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.173303] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.173307] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c2b60) 00:28:09.951 [2024-04-26 23:30:59.173313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.951 [2024-04-26 23:30:59.173322] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b180, cid 0, qid 0 00:28:09.951 [2024-04-26 23:30:59.173533] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:09.951 [2024-04-26 23:30:59.173539] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:09.951 [2024-04-26 23:30:59.173542] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.173546] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b180) on tqpair=0x13c2b60 00:28:09.951 [2024-04-26 23:30:59.173551] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:09.951 [2024-04-26 23:30:59.173556] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:09.951 [2024-04-26 23:30:59.173563] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:09.951 [2024-04-26 23:30:59.173575] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:09.951 [2024-04-26 23:30:59.173585] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.173589] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c2b60) 00:28:09.951 [2024-04-26 23:30:59.173595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.951 [2024-04-26 23:30:59.173605] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b180, cid 0, qid 0 00:28:09.951 [2024-04-26 23:30:59.173824] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:09.951 [2024-04-26 23:30:59.173831] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:09.951 [2024-04-26 23:30:59.173835] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.173843] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c2b60): datao=0, datal=4096, cccid=0 00:28:09.951 [2024-04-26 23:30:59.173848] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142b180) on tqpair(0x13c2b60): expected_datao=0, payload_size=4096 00:28:09.951 [2024-04-26 23:30:59.173853] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.173860] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.173864] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174005] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:09.951 [2024-04-26 23:30:59.174011] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:09.951 [2024-04-26 23:30:59.174015] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174018] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b180) on tqpair=0x13c2b60 00:28:09.951 [2024-04-26 23:30:59.174026] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:09.951 [2024-04-26 23:30:59.174031] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:09.951 [2024-04-26 23:30:59.174035] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:09.951 [2024-04-26 23:30:59.174040] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:09.951 [2024-04-26 23:30:59.174044] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:09.951 [2024-04-26 23:30:59.174049] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:09.951 [2024-04-26 23:30:59.174057] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:09.951 [2024-04-26 23:30:59.174063] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174067] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174070] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c2b60) 00:28:09.951 [2024-04-26 23:30:59.174077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:09.951 [2024-04-26 23:30:59.174088] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b180, cid 0, qid 0 00:28:09.951 [2024-04-26 23:30:59.174296] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:09.951 [2024-04-26 23:30:59.174302] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:09.951 [2024-04-26 23:30:59.174305] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174309] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b180) on tqpair=0x13c2b60 00:28:09.951 [2024-04-26 23:30:59.174317] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174321] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174324] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c2b60) 00:28:09.951 [2024-04-26 23:30:59.174330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:09.951 [2024-04-26 23:30:59.174337] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174340] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174346] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13c2b60) 00:28:09.951 [2024-04-26 23:30:59.174351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.951 [2024-04-26 23:30:59.174357] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174361] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174364] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13c2b60) 00:28:09.951 [2024-04-26 23:30:59.174370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.951 [2024-04-26 23:30:59.174376] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174380] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174383] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:09.951 [2024-04-26 23:30:59.174389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.951 [2024-04-26 23:30:59.174393] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:09.951 [2024-04-26 23:30:59.174403] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:09.951 [2024-04-26 23:30:59.174410] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.951 [2024-04-26 23:30:59.174413] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c2b60) 00:28:09.951 [2024-04-26 23:30:59.174420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.951 [2024-04-26 23:30:59.174431] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b180, cid 0, qid 0 00:28:09.951 [2024-04-26 23:30:59.174436] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b2e0, cid 1, qid 0 00:28:09.951 [2024-04-26 23:30:59.174441] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b440, cid 2, qid 0 00:28:09.952 [2024-04-26 23:30:59.174446] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:09.952 [2024-04-26 23:30:59.174450] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:28:09.952 [2024-04-26 23:30:59.174692] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:09.952 [2024-04-26 23:30:59.174698] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:09.952 [2024-04-26 23:30:59.174702] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:09.952 [2024-04-26 23:30:59.174705] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c2b60 
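Up to this point the trace is the SPDK host driver's admin-queue init state machine running against the discovery controller: enable with CC.EN = 1, poll until CSTS.RDY = 1, IDENTIFY, configure AER (the four ASYNC EVENT REQUEST capsules, cid 0 through 3), then read the keep alive timer. The "pdu type" values printed by nvme_tcp_pdu_ch_handle are NVMe/TCP PDU opcodes from the transport specification. A minimal decode table as a sketch: the numeric values follow NVMe/TCP, but the enum and function names below are illustrative, not SPDK's internal identifiers.

    /* NVMe/TCP PDU types as they appear in the "pdu type = N" debug lines.
     * Values per the NVMe/TCP transport spec; names here are hypothetical. */
    #include <stdio.h>

    enum tcp_pdu_type {
        PDU_ICREQ        = 0x00, /* host -> target connection init */
        PDU_ICRESP       = 0x01, /* "pdu type = 1" right after the icreq send */
        PDU_H2C_TERM_REQ = 0x02,
        PDU_C2H_TERM_REQ = 0x03,
        PDU_CAPSULE_CMD  = 0x04, /* command capsule, host -> target */
        PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": completion capsule */
        PDU_H2C_DATA     = 0x06,
        PDU_C2H_DATA     = 0x07, /* "pdu type = 7": read data from target */
        PDU_R2T          = 0x09, /* ready-to-transfer, for host writes */
    };

    static const char *pdu_name(int t)
    {
        switch (t) {
        case PDU_ICRESP:       return "ICResp";
        case PDU_CAPSULE_RESP: return "CapsuleResp";
        case PDU_C2H_DATA:     return "C2HData";
        default:               return "other";
        }
    }

    int main(void)
    {
        printf("5 -> %s, 7 -> %s\n", pdu_name(5), pdu_name(7));
        return 0;
    }

Read that way, every "pdu type = 5" line in this trace is a completion capsule arriving for an admin command, and "pdu type = 7" carries Identify or log-page read data back to the host.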
00:28:09.952 [2024-04-26 23:30:59.174711] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:09.952 [2024-04-26 23:30:59.174716] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:09.952 [2024-04-26 23:30:59.174725] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:09.952 [2024-04-26 23:30:59.174729] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c2b60) 00:28:09.952 [2024-04-26 23:30:59.174735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.952 [2024-04-26 23:30:59.174745] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:28:09.952 [2024-04-26 23:30:59.174974] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:09.952 [2024-04-26 23:30:59.174981] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:09.952 [2024-04-26 23:30:59.174986] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:09.952 [2024-04-26 23:30:59.174990] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c2b60): datao=0, datal=4096, cccid=4 00:28:09.952 [2024-04-26 23:30:59.174994] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142b700) on tqpair(0x13c2b60): expected_datao=0, payload_size=4096 00:28:09.952 [2024-04-26 23:30:59.174998] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:09.952 [2024-04-26 23:30:59.175019] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:09.952 [2024-04-26 23:30:59.175023] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.218846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.220 [2024-04-26 23:30:59.218859] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.220 [2024-04-26 23:30:59.218862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.218866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c2b60 00:28:10.220 [2024-04-26 23:30:59.218880] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:10.220 [2024-04-26 23:30:59.218899] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.218903] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c2b60) 00:28:10.220 [2024-04-26 23:30:59.218910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.220 [2024-04-26 23:30:59.218917] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.218921] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.218924] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13c2b60) 00:28:10.220 [2024-04-26 23:30:59.218931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.220 [2024-04-26 23:30:59.218948] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:28:10.220 [2024-04-26 23:30:59.218953] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b860, cid 5, qid 0 00:28:10.220 [2024-04-26 23:30:59.219187] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.220 [2024-04-26 23:30:59.219193] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.220 [2024-04-26 23:30:59.219197] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.219200] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c2b60): datao=0, datal=1024, cccid=4 00:28:10.220 [2024-04-26 23:30:59.219205] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142b700) on tqpair(0x13c2b60): expected_datao=0, payload_size=1024 00:28:10.220 [2024-04-26 23:30:59.219209] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.219216] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.219219] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.219225] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.220 [2024-04-26 23:30:59.219231] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.220 [2024-04-26 23:30:59.219234] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.219238] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b860) on tqpair=0x13c2b60 00:28:10.220 [2024-04-26 23:30:59.261021] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.220 [2024-04-26 23:30:59.261031] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.220 [2024-04-26 23:30:59.261034] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.261038] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c2b60 00:28:10.220 [2024-04-26 23:30:59.261049] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.261056] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c2b60) 00:28:10.220 [2024-04-26 23:30:59.261063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.220 [2024-04-26 23:30:59.261078] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:28:10.220 [2024-04-26 23:30:59.261301] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.220 [2024-04-26 23:30:59.261308] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.220 [2024-04-26 23:30:59.261311] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.261315] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c2b60): datao=0, datal=3072, cccid=4 00:28:10.220 [2024-04-26 23:30:59.261319] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142b700) on tqpair(0x13c2b60): expected_datao=0, payload_size=3072 00:28:10.220 [2024-04-26 23:30:59.261323] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.261330] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
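The GET LOG PAGE (02) commands with log identifier 0x70 in cdw10 (00ff0070, then 02ff0070, then 00010070) are the host fetching the discovery log page in pieces; the upper half of cdw10 is the 0-based dword count, which matches the C2H data lengths of 1024, 3072, and finally 8 bytes, the last apparently re-reading the generation counter to confirm the log did not change mid-read. The formatted controller dump that follows is spdk_nvme_identify rendering the identify data and this log page. A hedged sketch of the same read using SPDK's public host API (spdk/nvme.h); the already-connected ctrlr handle and the simplified completion flag are assumptions for illustration:

    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static volatile bool done;

    static void log_page_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        (void)ctx;
        (void)cpl;
        done = true;
    }

    /* Reads the 1024-byte discovery log header (log identifier 0x70). The
     * 0-based dword count rides in the upper half of cdw10, which is why the
     * trace shows cdw10:00ff0070 for 1 KiB and cdw10:00010070 for 8 bytes. */
    static int read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                                     struct spdk_nvmf_discovery_log_page *hdr)
    {
        int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                                  SPDK_NVME_GLOBAL_NS_TAG,
                                                  hdr, sizeof(*hdr), 0,
                                                  log_page_cb, NULL);
        if (rc != 0) {
            return rc;
        }
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }

The entries[] that follow the header are what the dump below prints as "Discovery Log Entry 0" and "Discovery Log Entry 1".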
00:28:10.220 [2024-04-26 23:30:59.261333] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.220 [2024-04-26 23:30:59.261473] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.221 [2024-04-26 23:30:59.261479] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.221 [2024-04-26 23:30:59.261482] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.261486] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c2b60 00:28:10.221 [2024-04-26 23:30:59.261495] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.261499] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c2b60) 00:28:10.221 [2024-04-26 23:30:59.261505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.221 [2024-04-26 23:30:59.261518] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b700, cid 4, qid 0 00:28:10.221 [2024-04-26 23:30:59.261735] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.221 [2024-04-26 23:30:59.261741] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.221 [2024-04-26 23:30:59.261745] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.261748] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c2b60): datao=0, datal=8, cccid=4 00:28:10.221 [2024-04-26 23:30:59.261753] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142b700) on tqpair(0x13c2b60): expected_datao=0, payload_size=8 00:28:10.221 [2024-04-26 23:30:59.261757] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.261763] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.261767] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.304848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.221 [2024-04-26 23:30:59.304859] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.221 [2024-04-26 23:30:59.304862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.304866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b700) on tqpair=0x13c2b60
00:28:10.221 =====================================================
00:28:10.221 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:10.221 =====================================================
00:28:10.221 Controller Capabilities/Features
00:28:10.221 ================================
00:28:10.221 Vendor ID: 0000
00:28:10.221 Subsystem Vendor ID: 0000
00:28:10.221 Serial Number: ....................
00:28:10.221 Model Number: ........................................
00:28:10.221 Firmware Version: 24.05
00:28:10.221 Recommended Arb Burst: 0
00:28:10.221 IEEE OUI Identifier: 00 00 00
00:28:10.221 Multi-path I/O
00:28:10.221 May have multiple subsystem ports: No
00:28:10.221 May have multiple controllers: No
00:28:10.221 Associated with SR-IOV VF: No
00:28:10.221 Max Data Transfer Size: 131072
00:28:10.221 Max Number of Namespaces: 0
00:28:10.221 Max Number of I/O Queues: 1024
00:28:10.221 NVMe Specification Version (VS): 1.3
00:28:10.221 NVMe Specification Version (Identify): 1.3
00:28:10.221 Maximum Queue Entries: 128
00:28:10.221 Contiguous Queues Required: Yes
00:28:10.221 Arbitration Mechanisms Supported
00:28:10.221 Weighted Round Robin: Not Supported
00:28:10.221 Vendor Specific: Not Supported
00:28:10.221 Reset Timeout: 15000 ms
00:28:10.221 Doorbell Stride: 4 bytes
00:28:10.221 NVM Subsystem Reset: Not Supported
00:28:10.221 Command Sets Supported
00:28:10.221 NVM Command Set: Supported
00:28:10.221 Boot Partition: Not Supported
00:28:10.221 Memory Page Size Minimum: 4096 bytes
00:28:10.221 Memory Page Size Maximum: 4096 bytes
00:28:10.221 Persistent Memory Region: Not Supported
00:28:10.221 Optional Asynchronous Events Supported
00:28:10.221 Namespace Attribute Notices: Not Supported
00:28:10.221 Firmware Activation Notices: Not Supported
00:28:10.221 ANA Change Notices: Not Supported
00:28:10.221 PLE Aggregate Log Change Notices: Not Supported
00:28:10.221 LBA Status Info Alert Notices: Not Supported
00:28:10.221 EGE Aggregate Log Change Notices: Not Supported
00:28:10.221 Normal NVM Subsystem Shutdown event: Not Supported
00:28:10.221 Zone Descriptor Change Notices: Not Supported
00:28:10.221 Discovery Log Change Notices: Supported
00:28:10.221 Controller Attributes
00:28:10.221 128-bit Host Identifier: Not Supported
00:28:10.221 Non-Operational Permissive Mode: Not Supported
00:28:10.221 NVM Sets: Not Supported
00:28:10.221 Read Recovery Levels: Not Supported
00:28:10.221 Endurance Groups: Not Supported
00:28:10.221 Predictable Latency Mode: Not Supported
00:28:10.221 Traffic Based Keep Alive: Not Supported
00:28:10.221 Namespace Granularity: Not Supported
00:28:10.221 SQ Associations: Not Supported
00:28:10.221 UUID List: Not Supported
00:28:10.221 Multi-Domain Subsystem: Not Supported
00:28:10.221 Fixed Capacity Management: Not Supported
00:28:10.221 Variable Capacity Management: Not Supported
00:28:10.221 Delete Endurance Group: Not Supported
00:28:10.221 Delete NVM Set: Not Supported
00:28:10.221 Extended LBA Formats Supported: Not Supported
00:28:10.221 Flexible Data Placement Supported: Not Supported
00:28:10.221
00:28:10.221 Controller Memory Buffer Support
00:28:10.221 ================================
00:28:10.221 Supported: No
00:28:10.221
00:28:10.221 Persistent Memory Region Support
00:28:10.221 ================================
00:28:10.221 Supported: No
00:28:10.221
00:28:10.221 Admin Command Set Attributes
00:28:10.221 ============================
00:28:10.221 Security Send/Receive: Not Supported
00:28:10.221 Format NVM: Not Supported
00:28:10.221 Firmware Activate/Download: Not Supported
00:28:10.221 Namespace Management: Not Supported
00:28:10.221 Device Self-Test: Not Supported
00:28:10.221 Directives: Not Supported
00:28:10.221 NVMe-MI: Not Supported
00:28:10.221 Virtualization Management: Not Supported
00:28:10.221 Doorbell Buffer Config: Not Supported
00:28:10.221 Get LBA Status Capability: Not Supported
00:28:10.221 Command & Feature Lockdown Capability: Not Supported
00:28:10.221 Abort Command Limit: 1
00:28:10.221 Async Event Request Limit: 4
00:28:10.221 Number of Firmware Slots: N/A
00:28:10.221 Firmware Slot 1 Read-Only: N/A
00:28:10.221 Firmware Activation Without Reset: N/A
00:28:10.221 Multiple Update Detection Support: N/A
00:28:10.221 Firmware Update Granularity: No Information Provided
00:28:10.221 Per-Namespace SMART Log: No
00:28:10.221 Asymmetric Namespace Access Log Page: Not Supported
00:28:10.221 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:10.221 Command Effects Log Page: Not Supported
00:28:10.221 Get Log Page Extended Data: Supported
00:28:10.221 Telemetry Log Pages: Not Supported
00:28:10.221 Persistent Event Log Pages: Not Supported
00:28:10.221 Supported Log Pages Log Page: May Support
00:28:10.221 Commands Supported & Effects Log Page: Not Supported
00:28:10.221 Feature Identifiers & Effects Log Page: May Support
00:28:10.221 NVMe-MI Commands & Effects Log Page: May Support
00:28:10.221 Data Area 4 for Telemetry Log: Not Supported
00:28:10.221 Error Log Page Entries Supported: 128
00:28:10.221 Keep Alive: Not Supported
00:28:10.221
00:28:10.221 NVM Command Set Attributes
00:28:10.221 ==========================
00:28:10.221 Submission Queue Entry Size
00:28:10.221 Max: 1
00:28:10.221 Min: 1
00:28:10.221 Completion Queue Entry Size
00:28:10.221 Max: 1
00:28:10.221 Min: 1
00:28:10.221 Number of Namespaces: 0
00:28:10.221 Compare Command: Not Supported
00:28:10.221 Write Uncorrectable Command: Not Supported
00:28:10.221 Dataset Management Command: Not Supported
00:28:10.221 Write Zeroes Command: Not Supported
00:28:10.221 Set Features Save Field: Not Supported
00:28:10.221 Reservations: Not Supported
00:28:10.221 Timestamp: Not Supported
00:28:10.221 Copy: Not Supported
00:28:10.221 Volatile Write Cache: Not Present
00:28:10.221 Atomic Write Unit (Normal): 1
00:28:10.221 Atomic Write Unit (PFail): 1
00:28:10.221 Atomic Compare & Write Unit: 1
00:28:10.221 Fused Compare & Write: Supported
00:28:10.221 Scatter-Gather List
00:28:10.221 SGL Command Set: Supported
00:28:10.221 SGL Keyed: Supported
00:28:10.221 SGL Bit Bucket Descriptor: Not Supported
00:28:10.221 SGL Metadata Pointer: Not Supported
00:28:10.221 Oversized SGL: Not Supported
00:28:10.221 SGL Metadata Address: Not Supported
00:28:10.221 SGL Offset: Supported
00:28:10.221 Transport SGL Data Block: Not Supported
00:28:10.221 Replay Protected Memory Block: Not Supported
00:28:10.221
00:28:10.221 Firmware Slot Information
00:28:10.221 =========================
00:28:10.221 Active slot: 0
00:28:10.221
00:28:10.221
00:28:10.221 Error Log
00:28:10.221 =========
00:28:10.221
00:28:10.221 Active Namespaces
00:28:10.221 =================
00:28:10.221 Discovery Log Page
00:28:10.221 ==================
00:28:10.221 Generation Counter: 2
00:28:10.221 Number of Records: 2
00:28:10.221 Record Format: 0
00:28:10.221
00:28:10.221 Discovery Log Entry 0
00:28:10.221 ----------------------
00:28:10.221 Transport Type: 3 (TCP)
00:28:10.221 Address Family: 1 (IPv4)
00:28:10.221 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:10.221 Entry Flags:
00:28:10.221 Duplicate Returned Information: 1
00:28:10.221 Explicit Persistent Connection Support for Discovery: 1
00:28:10.221 Transport Requirements:
00:28:10.221 Secure Channel: Not Required
00:28:10.221 Port ID: 0 (0x0000)
00:28:10.221 Controller ID: 65535 (0xffff)
00:28:10.221 Admin Max SQ Size: 128
00:28:10.221 Transport Service Identifier: 4420
00:28:10.221 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:10.221 Transport Address: 10.0.0.2
00:28:10.221 Discovery Log Entry 1
00:28:10.221 ----------------------
00:28:10.221 Transport Type: 3 (TCP)
00:28:10.221 Address Family: 1 (IPv4)
00:28:10.221 Subsystem Type: 2 (NVM Subsystem)
00:28:10.221 Entry Flags:
00:28:10.221 Duplicate Returned Information: 0
00:28:10.221 Explicit Persistent Connection Support for Discovery: 0
00:28:10.221 Transport Requirements:
00:28:10.221 Secure Channel: Not Required
00:28:10.221 Port ID: 0 (0x0000)
00:28:10.221 Controller ID: 65535 (0xffff)
00:28:10.221 Admin Max SQ Size: 128
00:28:10.221 Transport Service Identifier: 4420
00:28:10.221 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:10.221 Transport Address: 10.0.0.2
[2024-04-26 23:30:59.304952] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:10.221 [2024-04-26 23:30:59.304964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.221 [2024-04-26 23:30:59.304971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.221 [2024-04-26 23:30:59.304977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.221 [2024-04-26 23:30:59.304985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.221 [2024-04-26 23:30:59.304993] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.304997] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305000] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.221 [2024-04-26 23:30:59.305008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.221 [2024-04-26 23:30:59.305021] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.221 [2024-04-26 23:30:59.305119] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.221 [2024-04-26 23:30:59.305126] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.221 [2024-04-26 23:30:59.305129] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305133] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.221 [2024-04-26 23:30:59.305140] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305144] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305147] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.221 [2024-04-26 23:30:59.305154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.221 [2024-04-26 23:30:59.305167] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.221 [2024-04-26 23:30:59.305387] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.221 [2024-04-26 23:30:59.305393] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.221 [2024-04-26 23:30:59.305397]
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305400] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.221 [2024-04-26 23:30:59.305406] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:10.221 [2024-04-26 23:30:59.305410] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:10.221 [2024-04-26 23:30:59.305419] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305423] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305426] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.221 [2024-04-26 23:30:59.305433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.221 [2024-04-26 23:30:59.305442] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.221 [2024-04-26 23:30:59.305654] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.221 [2024-04-26 23:30:59.305660] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.221 [2024-04-26 23:30:59.305664] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305667] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.221 [2024-04-26 23:30:59.305678] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305681] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305685] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.221 [2024-04-26 23:30:59.305692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.221 [2024-04-26 23:30:59.305701] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.221 [2024-04-26 23:30:59.305891] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.221 [2024-04-26 23:30:59.305898] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.221 [2024-04-26 23:30:59.305902] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305905] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.221 [2024-04-26 23:30:59.305916] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305919] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.305923] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.221 [2024-04-26 23:30:59.305930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.221 [2024-04-26 23:30:59.305939] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.221 [2024-04-26 23:30:59.306111] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.221 [2024-04-26 
23:30:59.306117] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.221 [2024-04-26 23:30:59.306121] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306124] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.221 [2024-04-26 23:30:59.306134] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306138] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306142] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.221 [2024-04-26 23:30:59.306148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.221 [2024-04-26 23:30:59.306158] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.221 [2024-04-26 23:30:59.306354] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.221 [2024-04-26 23:30:59.306360] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.221 [2024-04-26 23:30:59.306363] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306367] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.221 [2024-04-26 23:30:59.306377] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306381] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306384] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.221 [2024-04-26 23:30:59.306391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.221 [2024-04-26 23:30:59.306401] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.221 [2024-04-26 23:30:59.306593] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.221 [2024-04-26 23:30:59.306599] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.221 [2024-04-26 23:30:59.306602] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306606] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.221 [2024-04-26 23:30:59.306616] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306620] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306623] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.221 [2024-04-26 23:30:59.306630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.221 [2024-04-26 23:30:59.306639] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.221 [2024-04-26 23:30:59.306822] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.221 [2024-04-26 23:30:59.306831] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.221 [2024-04-26 23:30:59.306834] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
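This run of near-identical FABRIC PROPERTY GET qid:0 cid:3 capsules, which continues below, is the driver tearing the controller down: after "Prepare to destruct SSD" it wrote CC.SHN through the FABRIC PROPERTY SET above and is now polling CSTS until the shutdown-status field reports complete, which lands below as "shutdown complete in 7 milliseconds". A self-contained sketch of that poll; read_csts() is a hypothetical stand-in for a fabrics Property Get of the CSTS register:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    #define NVME_CSTS_SHST_MASK     0xCu  /* CSTS bits 3:2, shutdown status */
    #define NVME_CSTS_SHST_COMPLETE 0x8u  /* 10b = shutdown processing complete */

    /* Hypothetical transport hook; a real host issues the FABRIC PROPERTY GET
     * capsules seen in the log to fetch CSTS over the admin queue. */
    static uint32_t read_csts(void) { return NVME_CSTS_SHST_COMPLETE; }

    static bool wait_shutdown_complete(unsigned int max_polls)
    {
        for (unsigned int i = 0; i < max_polls; i++) {
            if ((read_csts() & NVME_CSTS_SHST_MASK) == NVME_CSTS_SHST_COMPLETE) {
                return true;  /* driver then logs "shutdown complete in N ms" */
            }
            /* a real loop would sleep roughly 1 ms between polls */
        }
        return false;
    }

    int main(void)
    {
        printf("shutdown complete: %d\n", wait_shutdown_complete(10000));
        return 0;
    }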
00:28:10.221 [2024-04-26 23:30:59.306842] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.221 [2024-04-26 23:30:59.306852] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306856] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.221 [2024-04-26 23:30:59.306860] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.222 [2024-04-26 23:30:59.306867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.306877] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.222 [2024-04-26 23:30:59.307091] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.307097] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.307100] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307104] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.222 [2024-04-26 23:30:59.307114] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307118] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307121] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.222 [2024-04-26 23:30:59.307128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.307138] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.222 [2024-04-26 23:30:59.307330] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.307336] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.307339] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307343] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.222 [2024-04-26 23:30:59.307353] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307357] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307360] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.222 [2024-04-26 23:30:59.307367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.307376] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.222 [2024-04-26 23:30:59.307580] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.307586] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.307590] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307593] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.222 [2024-04-26 23:30:59.307603] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307607] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307611] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.222 [2024-04-26 23:30:59.307617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.307627] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.222 [2024-04-26 23:30:59.307831] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.307840] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.307846] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307850] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.222 [2024-04-26 23:30:59.307860] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307864] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.307867] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.222 [2024-04-26 23:30:59.307874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.307884] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.222 [2024-04-26 23:30:59.308077] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.308083] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.308087] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.308091] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.222 [2024-04-26 23:30:59.308101] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.308105] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.308109] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.222 [2024-04-26 23:30:59.308115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.308125] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.222 [2024-04-26 23:30:59.308335] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.308341] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.308345] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.308348] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.222 [2024-04-26 23:30:59.308358] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.308362] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 
23:30:59.308366] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.222 [2024-04-26 23:30:59.308372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.308382] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.222 [2024-04-26 23:30:59.308589] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.308595] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.308598] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.308602] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.222 [2024-04-26 23:30:59.308612] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.308616] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.308619] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.222 [2024-04-26 23:30:59.308626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.308635] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.222 [2024-04-26 23:30:59.312844] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.312853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.312856] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.312862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.222 [2024-04-26 23:30:59.312873] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.312877] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.312880] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c2b60) 00:28:10.222 [2024-04-26 23:30:59.312887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.312898] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142b5a0, cid 3, qid 0 00:28:10.222 [2024-04-26 23:30:59.313089] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.313095] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.313099] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.313103] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142b5a0) on tqpair=0x13c2b60 00:28:10.222 [2024-04-26 23:30:59.313111] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:28:10.222 00:28:10.222 23:30:59 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:10.222 
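The harness now runs spdk_nvme_identify a second time, aimed straight at the NVM subsystem nqn.2016-06.io.spdk:cnode1 that the discovery log just advertised, with -L all enabling the debug logging that fills the rest of this output. The -r argument is a transport ID in the standard key:value form; a hedged sketch of parsing it with the public helper from spdk/nvme.h (the printf fields are just for illustration):

    #include <stdio.h>
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_nvme_transport_id trid = {};
        const char *str = "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                          "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1";

        if (spdk_nvme_transport_id_parse(&trid, str) != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return 1;
        }
        printf("trtype=%d traddr=%s svc=%s nqn=%s\n",
               trid.trtype, trid.traddr, trid.trsvcid, trid.subnqn);
        return 0;
    }

The parsed structure is what the tool hands to spdk_nvme_connect(), which is why the trace that follows repeats the same connect, ICReq/ICResp, and init state machine sequence, this time against cnode1.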
[2024-04-26 23:30:59.356277] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:28:10.222 [2024-04-26 23:30:59.356331] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090199 ] 00:28:10.222 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.222 [2024-04-26 23:30:59.388362] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:10.222 [2024-04-26 23:30:59.388406] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:10.222 [2024-04-26 23:30:59.388411] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:10.222 [2024-04-26 23:30:59.388423] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:10.222 [2024-04-26 23:30:59.388430] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:10.222 [2024-04-26 23:30:59.391866] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:10.222 [2024-04-26 23:30:59.391894] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11a2b60 0 00:28:10.222 [2024-04-26 23:30:59.399846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:10.222 [2024-04-26 23:30:59.399856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:10.222 [2024-04-26 23:30:59.399860] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:10.222 [2024-04-26 23:30:59.399863] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:10.222 [2024-04-26 23:30:59.399892] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.399898] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.399902] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a2b60) 00:28:10.222 [2024-04-26 23:30:59.399913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:10.222 [2024-04-26 23:30:59.399928] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b180, cid 0, qid 0 00:28:10.222 [2024-04-26 23:30:59.407848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.407857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.407861] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.407865] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b180) on tqpair=0x11a2b60 00:28:10.222 [2024-04-26 23:30:59.407875] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:10.222 [2024-04-26 23:30:59.407881] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:10.222 [2024-04-26 23:30:59.407886] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:10.222 [2024-04-26 23:30:59.407898] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 
23:30:59.407902] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.407906] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a2b60) 00:28:10.222 [2024-04-26 23:30:59.407913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.407926] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b180, cid 0, qid 0 00:28:10.222 [2024-04-26 23:30:59.408089] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.408096] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.408099] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408103] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b180) on tqpair=0x11a2b60 00:28:10.222 [2024-04-26 23:30:59.408109] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:10.222 [2024-04-26 23:30:59.408116] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:10.222 [2024-04-26 23:30:59.408122] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408126] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408129] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a2b60) 00:28:10.222 [2024-04-26 23:30:59.408136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.408146] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b180, cid 0, qid 0 00:28:10.222 [2024-04-26 23:30:59.408320] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.408326] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.408329] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408333] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b180) on tqpair=0x11a2b60 00:28:10.222 [2024-04-26 23:30:59.408339] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:10.222 [2024-04-26 23:30:59.408347] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:10.222 [2024-04-26 23:30:59.408353] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408357] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408361] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a2b60) 00:28:10.222 [2024-04-26 23:30:59.408368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.408378] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b180, cid 0, qid 0 00:28:10.222 [2024-04-26 23:30:59.408557] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 
[2024-04-26 23:30:59.408566] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.408570] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408574] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b180) on tqpair=0x11a2b60 00:28:10.222 [2024-04-26 23:30:59.408579] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:10.222 [2024-04-26 23:30:59.408588] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408592] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408596] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a2b60) 00:28:10.222 [2024-04-26 23:30:59.408602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.408612] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b180, cid 0, qid 0 00:28:10.222 [2024-04-26 23:30:59.408795] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.408802] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.408805] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408809] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b180) on tqpair=0x11a2b60 00:28:10.222 [2024-04-26 23:30:59.408814] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:10.222 [2024-04-26 23:30:59.408819] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:10.222 [2024-04-26 23:30:59.408826] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:10.222 [2024-04-26 23:30:59.408931] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:10.222 [2024-04-26 23:30:59.408935] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:10.222 [2024-04-26 23:30:59.408943] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408947] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.408950] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a2b60) 00:28:10.222 [2024-04-26 23:30:59.408957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.408967] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b180, cid 0, qid 0 00:28:10.222 [2024-04-26 23:30:59.409163] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.409169] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.409172] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 
23:30:59.409176] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b180) on tqpair=0x11a2b60 00:28:10.222 [2024-04-26 23:30:59.409181] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:10.222 [2024-04-26 23:30:59.409191] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.409194] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.409198] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a2b60) 00:28:10.222 [2024-04-26 23:30:59.409205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.409214] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b180, cid 0, qid 0 00:28:10.222 [2024-04-26 23:30:59.409408] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.409415] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.409418] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.409422] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b180) on tqpair=0x11a2b60 00:28:10.222 [2024-04-26 23:30:59.409427] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:10.222 [2024-04-26 23:30:59.409432] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:10.222 [2024-04-26 23:30:59.409439] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:10.222 [2024-04-26 23:30:59.409450] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:10.222 [2024-04-26 23:30:59.409459] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.409463] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a2b60) 00:28:10.222 [2024-04-26 23:30:59.409470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.222 [2024-04-26 23:30:59.409480] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b180, cid 0, qid 0 00:28:10.222 [2024-04-26 23:30:59.409674] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.222 [2024-04-26 23:30:59.409681] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.222 [2024-04-26 23:30:59.409684] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.409688] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a2b60): datao=0, datal=4096, cccid=0 00:28:10.222 [2024-04-26 23:30:59.409693] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x120b180) on tqpair(0x11a2b60): expected_datao=0, payload_size=4096 00:28:10.222 [2024-04-26 23:30:59.409697] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.409704] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.409708] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.409904] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.222 [2024-04-26 23:30:59.409911] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.222 [2024-04-26 23:30:59.409914] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.222 [2024-04-26 23:30:59.409918] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b180) on tqpair=0x11a2b60 00:28:10.222 [2024-04-26 23:30:59.409925] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:10.223 [2024-04-26 23:30:59.409930] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:10.223 [2024-04-26 23:30:59.409934] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:10.223 [2024-04-26 23:30:59.409938] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:10.223 [2024-04-26 23:30:59.409943] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:10.223 [2024-04-26 23:30:59.409948] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.409956] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.409963] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.409966] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.409972] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.409979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:10.223 [2024-04-26 23:30:59.409990] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b180, cid 0, qid 0 00:28:10.223 [2024-04-26 23:30:59.410189] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.410195] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.223 [2024-04-26 23:30:59.410199] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410202] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b180) on tqpair=0x11a2b60 00:28:10.223 [2024-04-26 23:30:59.410210] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410214] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410217] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.410224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.223 [2024-04-26 23:30:59.410230] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410233] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410237] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.410243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.223 [2024-04-26 23:30:59.410249] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410253] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410256] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.410262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.223 [2024-04-26 23:30:59.410268] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410272] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410275] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.410281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.223 [2024-04-26 23:30:59.410285] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.410296] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.410302] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410306] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.410313] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.223 [2024-04-26 23:30:59.410324] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b180, cid 0, qid 0 00:28:10.223 [2024-04-26 23:30:59.410330] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b2e0, cid 1, qid 0 00:28:10.223 [2024-04-26 23:30:59.410334] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b440, cid 2, qid 0 00:28:10.223 [2024-04-26 23:30:59.410339] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b5a0, cid 3, qid 0 00:28:10.223 [2024-04-26 23:30:59.410344] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b700, cid 4, qid 0 00:28:10.223 [2024-04-26 23:30:59.410557] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.410564] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.223 [2024-04-26 23:30:59.410568] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410572] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b700) on tqpair=0x11a2b60 00:28:10.223 [2024-04-26 23:30:59.410577] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:10.223 [2024-04-26 
23:30:59.410582] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.410591] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.410597] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.410604] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410608] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410611] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.410617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:10.223 [2024-04-26 23:30:59.410627] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b700, cid 4, qid 0 00:28:10.223 [2024-04-26 23:30:59.410796] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.410802] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.223 [2024-04-26 23:30:59.410805] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410809] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b700) on tqpair=0x11a2b60 00:28:10.223 [2024-04-26 23:30:59.410863] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.410872] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.410879] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.410883] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.410889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.223 [2024-04-26 23:30:59.410899] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b700, cid 4, qid 0 00:28:10.223 [2024-04-26 23:30:59.411111] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.223 [2024-04-26 23:30:59.411117] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.223 [2024-04-26 23:30:59.411120] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.411124] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a2b60): datao=0, datal=4096, cccid=4 00:28:10.223 [2024-04-26 23:30:59.411128] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x120b700) on tqpair(0x11a2b60): expected_datao=0, payload_size=4096 00:28:10.223 [2024-04-26 23:30:59.411133] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.411182] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.411186] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.411311] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.411317] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.223 [2024-04-26 23:30:59.411321] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.411326] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b700) on tqpair=0x11a2b60 00:28:10.223 [2024-04-26 23:30:59.411335] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:10.223 [2024-04-26 23:30:59.411344] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.411353] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.411359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.411363] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.411370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.223 [2024-04-26 23:30:59.411380] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b700, cid 4, qid 0 00:28:10.223 [2024-04-26 23:30:59.411567] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.223 [2024-04-26 23:30:59.411574] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.223 [2024-04-26 23:30:59.411578] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.411581] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a2b60): datao=0, datal=4096, cccid=4 00:28:10.223 [2024-04-26 23:30:59.411585] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x120b700) on tqpair(0x11a2b60): expected_datao=0, payload_size=4096 00:28:10.223 [2024-04-26 23:30:59.411590] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.411611] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.411615] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.455846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.455855] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.223 [2024-04-26 23:30:59.455859] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.455863] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b700) on tqpair=0x11a2b60 00:28:10.223 [2024-04-26 23:30:59.455878] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.455887] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.455895] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 
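The trace above is the standard NVMe identify sequence: IDENTIFY with CNS 01h (controller data, cdw10:00000001), CNS 02h (active namespace list, cdw10:00000002), CNS 00h (per-namespace data, cdw10:00000000) and, just below, CNS 03h (namespace identification descriptors, cdw10:00000003). The same admin commands can be replayed against this target from a plain Linux initiator with nvme-cli; a minimal sketch, assuming the subsystem is still listening on 10.0.0.2:4420 and that the new controller enumerates as /dev/nvme0 (device names vary per host):

  # attach the kernel initiator to the subsystem under test
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0        # IDENTIFY, CNS 01h
  nvme list-ns /dev/nvme0        # IDENTIFY, CNS 02h
  nvme id-ns /dev/nvme0 -n 1     # IDENTIFY, CNS 00h
  nvme ns-descs /dev/nvme0 -n 1  # IDENTIFY, CNS 03h
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1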
[2024-04-26 23:30:59.455899] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.455906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.223 [2024-04-26 23:30:59.455918] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b700, cid 4, qid 0 00:28:10.223 [2024-04-26 23:30:59.456094] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.223 [2024-04-26 23:30:59.456101] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.223 [2024-04-26 23:30:59.456104] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456108] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a2b60): datao=0, datal=4096, cccid=4 00:28:10.223 [2024-04-26 23:30:59.456114] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x120b700) on tqpair(0x11a2b60): expected_datao=0, payload_size=4096 00:28:10.223 [2024-04-26 23:30:59.456118] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456143] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456147] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456336] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.456342] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.223 [2024-04-26 23:30:59.456346] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456349] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b700) on tqpair=0x11a2b60 00:28:10.223 [2024-04-26 23:30:59.456357] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.456365] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.456373] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.456379] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.456384] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.456389] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:10.223 [2024-04-26 23:30:59.456393] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:10.223 [2024-04-26 23:30:59.456398] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:10.223 [2024-04-26 23:30:59.456411] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456415] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a2b60) 
00:28:10.223 [2024-04-26 23:30:59.456422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.223 [2024-04-26 23:30:59.456428] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456432] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456435] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.456441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.223 [2024-04-26 23:30:59.456454] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b700, cid 4, qid 0 00:28:10.223 [2024-04-26 23:30:59.456459] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b860, cid 5, qid 0 00:28:10.223 [2024-04-26 23:30:59.456664] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.456670] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.223 [2024-04-26 23:30:59.456673] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456677] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b700) on tqpair=0x11a2b60 00:28:10.223 [2024-04-26 23:30:59.456684] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.456690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.223 [2024-04-26 23:30:59.456693] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456697] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b860) on tqpair=0x11a2b60 00:28:10.223 [2024-04-26 23:30:59.456706] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456710] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.456716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.223 [2024-04-26 23:30:59.456728] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b860, cid 5, qid 0 00:28:10.223 [2024-04-26 23:30:59.456892] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.456899] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.223 [2024-04-26 23:30:59.456902] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456906] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b860) on tqpair=0x11a2b60 00:28:10.223 [2024-04-26 23:30:59.456915] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.456919] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.456925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.223 [2024-04-26 23:30:59.456935] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b860, cid 5, qid 0 00:28:10.223 [2024-04-26 23:30:59.457127] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.457134] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.223 [2024-04-26 23:30:59.457137] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.457141] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b860) on tqpair=0x11a2b60 00:28:10.223 [2024-04-26 23:30:59.457153] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.223 [2024-04-26 23:30:59.457157] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a2b60) 00:28:10.223 [2024-04-26 23:30:59.457163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.223 [2024-04-26 23:30:59.457172] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b860, cid 5, qid 0 00:28:10.223 [2024-04-26 23:30:59.457337] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.223 [2024-04-26 23:30:59.457344] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.224 [2024-04-26 23:30:59.457348] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457352] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b860) on tqpair=0x11a2b60 00:28:10.224 [2024-04-26 23:30:59.457364] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457368] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a2b60) 00:28:10.224 [2024-04-26 23:30:59.457375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.224 [2024-04-26 23:30:59.457382] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457386] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a2b60) 00:28:10.224 [2024-04-26 23:30:59.457392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.224 [2024-04-26 23:30:59.457400] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457404] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11a2b60) 00:28:10.224 [2024-04-26 23:30:59.457410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.224 [2024-04-26 23:30:59.457417] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457421] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11a2b60) 00:28:10.224 [2024-04-26 23:30:59.457428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.224 [2024-04-26 23:30:59.457441] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b860, cid 5, qid 0 00:28:10.224 [2024-04-26 23:30:59.457447] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b700, cid 
4, qid 0 00:28:10.224 [2024-04-26 23:30:59.457451] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b9c0, cid 6, qid 0 00:28:10.224 [2024-04-26 23:30:59.457456] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120bb20, cid 7, qid 0 00:28:10.224 [2024-04-26 23:30:59.457690] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.224 [2024-04-26 23:30:59.457696] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.224 [2024-04-26 23:30:59.457700] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457703] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a2b60): datao=0, datal=8192, cccid=5 00:28:10.224 [2024-04-26 23:30:59.457708] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x120b860) on tqpair(0x11a2b60): expected_datao=0, payload_size=8192 00:28:10.224 [2024-04-26 23:30:59.457712] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457797] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457801] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457807] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.224 [2024-04-26 23:30:59.457812] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.224 [2024-04-26 23:30:59.457816] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457819] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a2b60): datao=0, datal=512, cccid=4 00:28:10.224 [2024-04-26 23:30:59.457824] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x120b700) on tqpair(0x11a2b60): expected_datao=0, payload_size=512 00:28:10.224 [2024-04-26 23:30:59.457828] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457834] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457842] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.224 [2024-04-26 23:30:59.457853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.224 [2024-04-26 23:30:59.457856] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457860] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a2b60): datao=0, datal=512, cccid=6 00:28:10.224 [2024-04-26 23:30:59.457864] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x120b9c0) on tqpair(0x11a2b60): expected_datao=0, payload_size=512 00:28:10.224 [2024-04-26 23:30:59.457868] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457875] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457878] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457884] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:10.224 [2024-04-26 23:30:59.457889] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:10.224 [2024-04-26 23:30:59.457892] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.457896] 
nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a2b60): datao=0, datal=4096, cccid=7
00:28:10.224 [2024-04-26 23:30:59.457900] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x120bb20) on tqpair(0x11a2b60): expected_datao=0, payload_size=4096
00:28:10.224 [2024-04-26 23:30:59.457904] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.457911] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.457914] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.457967] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:10.224 [2024-04-26 23:30:59.457973] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:10.224 [2024-04-26 23:30:59.457978] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.457982] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b860) on tqpair=0x11a2b60
00:28:10.224 [2024-04-26 23:30:59.457995] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:10.224 [2024-04-26 23:30:59.458001] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:10.224 [2024-04-26 23:30:59.458004] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.458008] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b700) on tqpair=0x11a2b60
00:28:10.224 [2024-04-26 23:30:59.458017] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:10.224 [2024-04-26 23:30:59.458023] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:10.224 [2024-04-26 23:30:59.458026] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.458030] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b9c0) on tqpair=0x11a2b60
00:28:10.224 [2024-04-26 23:30:59.458037] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:10.224 [2024-04-26 23:30:59.458043] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:10.224 [2024-04-26 23:30:59.458046] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.458050] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120bb20) on tqpair=0x11a2b60
00:28:10.224 =====================================================
00:28:10.224 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:10.224 =====================================================
00:28:10.224 Controller Capabilities/Features
00:28:10.224 ================================
00:28:10.224 Vendor ID: 8086
00:28:10.224 Subsystem Vendor ID: 8086
00:28:10.224 Serial Number: SPDK00000000000001
00:28:10.224 Model Number: SPDK bdev Controller
00:28:10.224 Firmware Version: 24.05
00:28:10.224 Recommended Arb Burst: 6
00:28:10.224 IEEE OUI Identifier: e4 d2 5c
00:28:10.224 Multi-path I/O
00:28:10.224 May have multiple subsystem ports: Yes
00:28:10.224 May have multiple controllers: Yes
00:28:10.224 Associated with SR-IOV VF: No
00:28:10.224 Max Data Transfer Size: 131072
00:28:10.224 Max Number of Namespaces: 32
00:28:10.224 Max Number of I/O Queues: 127
00:28:10.224 NVMe Specification Version (VS): 1.3
00:28:10.224 NVMe Specification Version (Identify): 1.3
00:28:10.224 Maximum Queue Entries: 128
00:28:10.224 Contiguous Queues Required: Yes
00:28:10.224 Arbitration Mechanisms Supported
00:28:10.224 Weighted Round Robin: Not Supported
00:28:10.224 Vendor Specific: Not Supported
00:28:10.224 Reset Timeout: 15000 ms
00:28:10.224 Doorbell Stride: 4 bytes
00:28:10.224 NVM Subsystem Reset: Not Supported
00:28:10.224 Command Sets Supported
00:28:10.224 NVM Command Set: Supported
00:28:10.224 Boot Partition: Not Supported
00:28:10.224 Memory Page Size Minimum: 4096 bytes
00:28:10.224 Memory Page Size Maximum: 4096 bytes
00:28:10.224 Persistent Memory Region: Not Supported
00:28:10.224 Optional Asynchronous Events Supported
00:28:10.224 Namespace Attribute Notices: Supported
00:28:10.224 Firmware Activation Notices: Not Supported
00:28:10.224 ANA Change Notices: Not Supported
00:28:10.224 PLE Aggregate Log Change Notices: Not Supported
00:28:10.224 LBA Status Info Alert Notices: Not Supported
00:28:10.224 EGE Aggregate Log Change Notices: Not Supported
00:28:10.224 Normal NVM Subsystem Shutdown event: Not Supported
00:28:10.224 Zone Descriptor Change Notices: Not Supported
00:28:10.224 Discovery Log Change Notices: Not Supported
00:28:10.224 Controller Attributes
00:28:10.224 128-bit Host Identifier: Supported
00:28:10.224 Non-Operational Permissive Mode: Not Supported
00:28:10.224 NVM Sets: Not Supported
00:28:10.224 Read Recovery Levels: Not Supported
00:28:10.224 Endurance Groups: Not Supported
00:28:10.224 Predictable Latency Mode: Not Supported
00:28:10.224 Traffic Based Keep Alive: Not Supported
00:28:10.224 Namespace Granularity: Not Supported
00:28:10.224 SQ Associations: Not Supported
00:28:10.224 UUID List: Not Supported
00:28:10.224 Multi-Domain Subsystem: Not Supported
00:28:10.224 Fixed Capacity Management: Not Supported
00:28:10.224 Variable Capacity Management: Not Supported
00:28:10.224 Delete Endurance Group: Not Supported
00:28:10.224 Delete NVM Set: Not Supported
00:28:10.224 Extended LBA Formats Supported: Not Supported
00:28:10.224 Flexible Data Placement Supported: Not Supported
00:28:10.224
00:28:10.224 Controller Memory Buffer Support
00:28:10.224 ================================
00:28:10.224 Supported: No
00:28:10.224
00:28:10.224 Persistent Memory Region Support
00:28:10.224 ================================
00:28:10.224 Supported: No
00:28:10.224
00:28:10.224 Admin Command Set Attributes
00:28:10.224 ============================
00:28:10.224 Security Send/Receive: Not Supported
00:28:10.224 Format NVM: Not Supported
00:28:10.224 Firmware Activate/Download: Not Supported
00:28:10.224 Namespace Management: Not Supported
00:28:10.224 Device Self-Test: Not Supported
00:28:10.224 Directives: Not Supported
00:28:10.224 NVMe-MI: Not Supported
00:28:10.224 Virtualization Management: Not Supported
00:28:10.224 Doorbell Buffer Config: Not Supported
00:28:10.224 Get LBA Status Capability: Not Supported
00:28:10.224 Command & Feature Lockdown Capability: Not Supported
00:28:10.224 Abort Command Limit: 4
00:28:10.224 Async Event Request Limit: 4
00:28:10.224 Number of Firmware Slots: N/A
00:28:10.224 Firmware Slot 1 Read-Only: N/A
00:28:10.224 Firmware Activation Without Reset: N/A
00:28:10.224 Multiple Update Detection Support: N/A
00:28:10.224 Firmware Update Granularity: No Information Provided
00:28:10.224 Per-Namespace SMART Log: No
00:28:10.224 Asymmetric Namespace Access Log Page: Not Supported
00:28:10.224 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:28:10.224 Command Effects Log Page: Supported
00:28:10.224 Get Log Page Extended Data: Supported
00:28:10.224 Telemetry Log Pages: Not Supported
00:28:10.224 Persistent Event Log Pages: Not Supported
00:28:10.224 Supported Log Pages Log Page: May Support
00:28:10.224 Commands Supported & Effects Log Page: Not Supported
00:28:10.224 Feature Identifiers & Effects Log Page: May Support
00:28:10.224 NVMe-MI Commands & Effects Log Page: May Support
00:28:10.224 Data Area 4 for Telemetry Log: Not Supported
00:28:10.224 Error Log Page Entries Supported: 128
00:28:10.224 Keep Alive: Supported
00:28:10.224 Keep Alive Granularity: 10000 ms
00:28:10.224
00:28:10.224 NVM Command Set Attributes
00:28:10.224 ==========================
00:28:10.224 Submission Queue Entry Size
00:28:10.224 Max: 64
00:28:10.224 Min: 64
00:28:10.224 Completion Queue Entry Size
00:28:10.224 Max: 16
00:28:10.224 Min: 16
00:28:10.224 Number of Namespaces: 32
00:28:10.224 Compare Command: Supported
00:28:10.224 Write Uncorrectable Command: Not Supported
00:28:10.224 Dataset Management Command: Supported
00:28:10.224 Write Zeroes Command: Supported
00:28:10.224 Set Features Save Field: Not Supported
00:28:10.224 Reservations: Supported
00:28:10.224 Timestamp: Not Supported
00:28:10.224 Copy: Supported
00:28:10.224 Volatile Write Cache: Present
00:28:10.224 Atomic Write Unit (Normal): 1
00:28:10.224 Atomic Write Unit (PFail): 1
00:28:10.224 Atomic Compare & Write Unit: 1
00:28:10.224 Fused Compare & Write: Supported
00:28:10.224 Scatter-Gather List
00:28:10.224 SGL Command Set: Supported
00:28:10.224 SGL Keyed: Supported
00:28:10.224 SGL Bit Bucket Descriptor: Not Supported
00:28:10.224 SGL Metadata Pointer: Not Supported
00:28:10.224 Oversized SGL: Not Supported
00:28:10.224 SGL Metadata Address: Not Supported
00:28:10.224 SGL Offset: Supported
00:28:10.224 Transport SGL Data Block: Not Supported
00:28:10.224 Replay Protected Memory Block: Not Supported
00:28:10.224
00:28:10.224 Firmware Slot Information
00:28:10.224 =========================
00:28:10.224 Active slot: 1
00:28:10.224 Slot 1 Firmware Revision: 24.05
00:28:10.224
00:28:10.224
00:28:10.224 Commands Supported and Effects
00:28:10.224 ==============================
00:28:10.224 Admin Commands
00:28:10.224 --------------
00:28:10.224 Get Log Page (02h): Supported
00:28:10.224 Identify (06h): Supported
00:28:10.224 Abort (08h): Supported
00:28:10.224 Set Features (09h): Supported
00:28:10.224 Get Features (0Ah): Supported
00:28:10.224 Asynchronous Event Request (0Ch): Supported
00:28:10.224 Keep Alive (18h): Supported
00:28:10.224 I/O Commands
00:28:10.224 ------------
00:28:10.224 Flush (00h): Supported LBA-Change
00:28:10.224 Write (01h): Supported LBA-Change
00:28:10.224 Read (02h): Supported
00:28:10.224 Compare (05h): Supported
00:28:10.224 Write Zeroes (08h): Supported LBA-Change
00:28:10.224 Dataset Management (09h): Supported LBA-Change
00:28:10.224 Copy (19h): Supported LBA-Change
00:28:10.224 Unknown (79h): Supported LBA-Change
00:28:10.224 Unknown (7Ah): Supported
00:28:10.224
00:28:10.224 Error Log
00:28:10.224 =========
00:28:10.224
00:28:10.224 Arbitration
00:28:10.224 ===========
00:28:10.224 Arbitration Burst: 1
00:28:10.224
00:28:10.224 Power Management
00:28:10.224 ================
00:28:10.224 Number of Power States: 1
00:28:10.224 Current Power State: Power State #0
00:28:10.224 Power State #0:
00:28:10.224 Max Power: 0.00 W
00:28:10.224 Non-Operational State: Operational
00:28:10.224 Entry Latency: Not Reported
00:28:10.224 Exit Latency: Not Reported
00:28:10.224 Relative Read Throughput: 0
00:28:10.224 Relative Read Latency: 0
00:28:10.224 Relative Write Throughput: 0
00:28:10.224 Relative Write Latency: 0
00:28:10.224 Idle Power: Not Reported
00:28:10.224 Active Power: Not Reported
00:28:10.224 Non-Operational Permissive Mode: Not Supported
00:28:10.224
00:28:10.224 Health Information
00:28:10.224 ==================
00:28:10.224 Critical Warnings:
00:28:10.224 Available Spare Space: OK
00:28:10.224 Temperature: OK
00:28:10.224 Device Reliability: OK
00:28:10.224 Read Only: No
00:28:10.224 Volatile Memory Backup: OK
00:28:10.224 Current Temperature: 0 Kelvin (-273 Celsius)
00:28:10.224 Temperature Threshold: [2024-04-26 23:30:59.458153] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.458158] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11a2b60)
00:28:10.224 [2024-04-26 23:30:59.458165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.224 [2024-04-26 23:30:59.458176] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120bb20, cid 7, qid 0
00:28:10.224 [2024-04-26 23:30:59.458389] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:10.224 [2024-04-26 23:30:59.458395] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:10.224 [2024-04-26 23:30:59.458398] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.458402] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120bb20) on tqpair=0x11a2b60
00:28:10.224 [2024-04-26 23:30:59.458431] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:28:10.224 [2024-04-26 23:30:59.458442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.224 [2024-04-26 23:30:59.458448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.224 [2024-04-26 23:30:59.458454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.224 [2024-04-26 23:30:59.458460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:10.224 [2024-04-26 23:30:59.458467] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.458471] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.458475] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a2b60)
00:28:10.224 [2024-04-26 23:30:59.458482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.224 [2024-04-26 23:30:59.458493] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b5a0, cid 3, qid 0
00:28:10.224 [2024-04-26 23:30:59.458663] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:10.224 [2024-04-26 23:30:59.458669] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:10.224 [2024-04-26 23:30:59.458673] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:10.224 [2024-04-26 23:30:59.458677] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b5a0) on tqpair=0x11a2b60
00:28:10.224 [2024-04-26 23:30:59.458686] nvme_tcp.c:
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.458690] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.224 [2024-04-26 23:30:59.458693] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a2b60) 00:28:10.224 [2024-04-26 23:30:59.458700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.224 [2024-04-26 23:30:59.458713] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b5a0, cid 3, qid 0 00:28:10.224 [2024-04-26 23:30:59.458923] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.225 [2024-04-26 23:30:59.458929] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.225 [2024-04-26 23:30:59.458933] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.225 [2024-04-26 23:30:59.458936] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b5a0) on tqpair=0x11a2b60 00:28:10.225 [2024-04-26 23:30:59.458942] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:10.225 [2024-04-26 23:30:59.458946] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:10.225 [2024-04-26 23:30:59.458955] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.225 [2024-04-26 23:30:59.458959] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.225 [2024-04-26 23:30:59.458963] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a2b60) 00:28:10.225 [2024-04-26 23:30:59.458969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.225 [2024-04-26 23:30:59.458979] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b5a0, cid 3, qid 0 00:28:10.225 [2024-04-26 23:30:59.459141] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.225 [2024-04-26 23:30:59.459148] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.225 [2024-04-26 23:30:59.459151] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.225 [2024-04-26 23:30:59.459155] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b5a0) on tqpair=0x11a2b60 00:28:10.225 [2024-04-26 23:30:59.459165] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:10.225 [2024-04-26 23:30:59.459169] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:10.225 [2024-04-26 23:30:59.459172] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a2b60) 00:28:10.225 [2024-04-26 23:30:59.459179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.225 [2024-04-26 23:30:59.459188] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b5a0, cid 3, qid 0 00:28:10.225 [2024-04-26 23:30:59.459393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:10.225 [2024-04-26 23:30:59.459400] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:10.225 [2024-04-26 23:30:59.459403] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:10.225 [2024-04-26 23:30:59.459407] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x120b5a0) on tqpair=0x11a2b60
00:28:10.225 [2024-04-26 23:30:59.459417] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:10.225 [2024-04-26 23:30:59.459421] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:10.225 [2024-04-26 23:30:59.459424] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a2b60)
00:28:10.225 [2024-04-26 23:30:59.459431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.225 [2024-04-26 23:30:59.459440] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b5a0, cid 3, qid 0
00:28:10.225 [2024-04-26 23:30:59.459654] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:10.225 [2024-04-26 23:30:59.459660] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:10.225 [2024-04-26 23:30:59.459665] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:10.225 [2024-04-26 23:30:59.459669] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b5a0) on tqpair=0x11a2b60
00:28:10.225 [2024-04-26 23:30:59.459679] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:10.225 [2024-04-26 23:30:59.459683] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:10.225 [2024-04-26 23:30:59.459686] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a2b60)
00:28:10.225 [2024-04-26 23:30:59.459693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.225 [2024-04-26 23:30:59.459702] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x120b5a0, cid 3, qid 0
00:28:10.225 [2024-04-26 23:30:59.463846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:10.225 [2024-04-26 23:30:59.463854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:10.225 [2024-04-26 23:30:59.463857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:10.225 [2024-04-26 23:30:59.463861] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x120b5a0) on tqpair=0x11a2b60
00:28:10.225 [2024-04-26 23:30:59.463870] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds
00:28:10.486 0 Kelvin (-273 Celsius)
00:28:10.486 Available Spare: 0%
00:28:10.486 Available Spare Threshold: 0%
00:28:10.486 Life Percentage Used: 0%
00:28:10.486 Data Units Read: 0
00:28:10.486 Data Units Written: 0
00:28:10.486 Host Read Commands: 0
00:28:10.486 Host Write Commands: 0
00:28:10.486 Controller Busy Time: 0 minutes
00:28:10.486 Power Cycles: 0
00:28:10.486 Power On Hours: 0 hours
00:28:10.486 Unsafe Shutdowns: 0
00:28:10.486 Unrecoverable Media Errors: 0
00:28:10.486 Lifetime Error Log Entries: 0
00:28:10.486 Warning Temperature Time: 0 minutes
00:28:10.486 Critical Temperature Time: 0 minutes
00:28:10.486
00:28:10.486 Number of Queues
00:28:10.486 ================
00:28:10.486 Number of I/O Submission Queues: 127
00:28:10.486 Number of I/O Completion Queues: 127
00:28:10.486
00:28:10.486 Active Namespaces
00:28:10.486 =================
00:28:10.486 Namespace ID:1
00:28:10.486 Error Recovery Timeout: Unlimited
00:28:10.486 Command Set Identifier: NVM (00h)
00:28:10.486 Deallocate: Supported
00:28:10.486 Deallocated/Unwritten Error: Not Supported
00:28:10.486 Deallocated Read Value: Unknown
00:28:10.486 Deallocate in Write Zeroes: Not Supported
00:28:10.486 Deallocated Guard Field: 0xFFFF
00:28:10.486 Flush: Supported
00:28:10.486 Reservation: Supported
00:28:10.486 Namespace Sharing Capabilities: Multiple Controllers
00:28:10.486 Size (in LBAs): 131072 (0GiB)
00:28:10.486 Capacity (in LBAs): 131072 (0GiB)
00:28:10.486 Utilization (in LBAs): 131072 (0GiB)
00:28:10.486 NGUID: ABCDEF0123456789ABCDEF0123456789
00:28:10.486 EUI64: ABCDEF0123456789
00:28:10.486 UUID: 1fa27bb2-6cd6-4e2b-9bbc-c964a7fa2a84
00:28:10.486 Thin Provisioning: Not Supported
00:28:10.486 Per-NS Atomic Units: Yes
00:28:10.486 Atomic Boundary Size (Normal): 0
00:28:10.486 Atomic Boundary Size (PFail): 0
00:28:10.486 Atomic Boundary Offset: 0
00:28:10.486 Maximum Single Source Range Length: 65535
00:28:10.486 Maximum Copy Length: 65535
00:28:10.486 Maximum Source Range Count: 1
00:28:10.486 NGUID/EUI64 Never Reused: No
00:28:10.486 Namespace Write Protected: No
00:28:10.486 Number of LBA Formats: 1
00:28:10.486 Current LBA Format: LBA Format #00
00:28:10.486 LBA Format #00: Data Size: 512 Metadata Size: 0
00:28:10.486
00:28:10.486 23:30:59 -- host/identify.sh@51 -- # sync
00:28:10.486 23:30:59 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:10.486 23:30:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:10.486 23:30:59 -- common/autotest_common.sh@10 -- # set +x
00:28:10.486 23:30:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:10.486 23:30:59 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:28:10.486 23:30:59 -- host/identify.sh@56 -- # nvmftestfini
00:28:10.486 23:30:59 -- nvmf/common.sh@477 -- # nvmfcleanup
00:28:10.486 23:30:59 -- nvmf/common.sh@117 -- # sync
00:28:10.486 23:30:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:10.486 23:30:59 -- nvmf/common.sh@120 -- # set +e
00:28:10.486 23:30:59 -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:10.486 23:30:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:10.486 rmmod nvme_tcp
00:28:10.486 rmmod nvme_fabrics
00:28:10.486 rmmod nvme_keyring
00:28:10.486 23:30:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:10.486 23:30:59 -- nvmf/common.sh@124 -- # set -e
00:28:10.486 23:30:59 -- nvmf/common.sh@125 -- # return 0
00:28:10.486 23:30:59 -- nvmf/common.sh@478 -- # '[' -n 4089990 ']'
00:28:10.486 23:30:59 -- nvmf/common.sh@479 -- # killprocess 4089990
00:28:10.486 23:30:59 -- common/autotest_common.sh@936 -- # '[' -z 4089990 ']'
00:28:10.486 23:30:59 -- common/autotest_common.sh@940 -- # kill -0 4089990
00:28:10.486 23:30:59 -- common/autotest_common.sh@941 -- # uname
00:28:10.486 23:30:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:10.486 23:30:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4089990
00:28:10.486 23:30:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:10.486 23:30:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:10.486 23:30:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4089990'
00:28:10.486 killing process with pid 4089990
00:28:10.486 23:30:59 -- common/autotest_common.sh@955 -- # kill 4089990
00:28:10.486 [2024-04-26 23:30:59.617033] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:28:10.486 23:30:59 -- common/autotest_common.sh@960 -- # wait 4089990
00:28:10.748
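With the identify pass complete, the harness shuts the controller down (the RTD3E/shutdown-timeout exchange over FABRIC PROPERTY SET/GET in the trace above), deletes the subsystem over RPC and unloads the initiator modules. A rough sketch of the equivalent manual cleanup, mirroring what nvmftestfini does here; $tgt_pid is a placeholder for the nvmf_tgt process id (4089990 in this run):

  # remove the subsystem from the running target, then unload host-side modules
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sudo modprobe -v -r nvme-tcp nvme-fabrics
  kill $tgt_pid && wait $tgt_pid   # mirrors the killprocess/wait pair in the trace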
23:30:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:10.748 23:30:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:10.748 23:30:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:10.748 23:30:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.748 23:30:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.748 23:30:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.748 23:30:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.748 23:30:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.676 23:31:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:12.676 00:28:12.676 real 0m11.047s 00:28:12.676 user 0m7.914s 00:28:12.676 sys 0m5.781s 00:28:12.676 23:31:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:12.676 23:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:12.676 ************************************ 00:28:12.676 END TEST nvmf_identify 00:28:12.676 ************************************ 00:28:12.676 23:31:01 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:12.676 23:31:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:12.676 23:31:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:12.676 23:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:13.022 ************************************ 00:28:13.022 START TEST nvmf_perf 00:28:13.022 ************************************ 00:28:13.022 23:31:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:13.022 * Looking for test storage... 00:28:13.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.022 23:31:02 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.022 23:31:02 -- nvmf/common.sh@7 -- # uname -s 00:28:13.022 23:31:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.022 23:31:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.022 23:31:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.022 23:31:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.022 23:31:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.022 23:31:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.022 23:31:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.022 23:31:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.022 23:31:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.022 23:31:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.022 23:31:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:13.022 23:31:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:13.022 23:31:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.022 23:31:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.022 23:31:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.022 23:31:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.022 23:31:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.022 23:31:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.022 23:31:02 -- scripts/common.sh@516 
-- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.022 23:31:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.022 23:31:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.022 23:31:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.022 23:31:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.022 23:31:02 -- paths/export.sh@5 -- # export PATH 00:28:13.022 23:31:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.022 23:31:02 -- nvmf/common.sh@47 -- # : 0 00:28:13.022 23:31:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.022 23:31:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.022 23:31:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.022 23:31:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.022 23:31:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.023 23:31:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.023 23:31:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.023 23:31:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.023 23:31:02 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:13.023 23:31:02 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:13.023 23:31:02 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:13.023 23:31:02 -- host/perf.sh@17 -- # nvmftestinit 00:28:13.023 23:31:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:13.023 23:31:02 -- nvmf/common.sh@435 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:28:13.023 23:31:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:13.023 23:31:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:13.023 23:31:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:13.023 23:31:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.023 23:31:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.023 23:31:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.023 23:31:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:13.023 23:31:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:13.023 23:31:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:13.023 23:31:02 -- common/autotest_common.sh@10 -- # set +x 00:28:19.635 23:31:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:19.635 23:31:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:19.635 23:31:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:19.635 23:31:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:19.635 23:31:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:19.635 23:31:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:19.635 23:31:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:19.635 23:31:08 -- nvmf/common.sh@295 -- # net_devs=() 00:28:19.635 23:31:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:19.635 23:31:08 -- nvmf/common.sh@296 -- # e810=() 00:28:19.635 23:31:08 -- nvmf/common.sh@296 -- # local -ga e810 00:28:19.635 23:31:08 -- nvmf/common.sh@297 -- # x722=() 00:28:19.635 23:31:08 -- nvmf/common.sh@297 -- # local -ga x722 00:28:19.635 23:31:08 -- nvmf/common.sh@298 -- # mlx=() 00:28:19.635 23:31:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:19.635 23:31:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.635 23:31:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:19.635 23:31:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:19.635 23:31:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:19.635 23:31:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.635 23:31:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:19.635 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:19.635 23:31:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.635 23:31:08 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.635 23:31:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:19.635 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:19.635 23:31:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:19.635 23:31:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.635 23:31:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.635 23:31:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:19.635 23:31:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.635 23:31:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:19.635 Found net devices under 0000:31:00.0: cvl_0_0 00:28:19.635 23:31:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.635 23:31:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.635 23:31:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.635 23:31:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:19.635 23:31:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.635 23:31:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:19.635 Found net devices under 0000:31:00.1: cvl_0_1 00:28:19.635 23:31:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.635 23:31:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:19.635 23:31:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:19.635 23:31:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:19.635 23:31:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.635 23:31:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.635 23:31:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.635 23:31:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:19.635 23:31:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.635 23:31:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.635 23:31:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:19.635 23:31:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.635 23:31:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.635 23:31:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:19.635 23:31:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:19.635 23:31:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.635 23:31:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.635 23:31:08 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:19.635 23:31:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.635 23:31:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:19.635 23:31:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.635 23:31:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.635 23:31:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.635 23:31:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:19.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:28:19.635 00:28:19.635 --- 10.0.0.2 ping statistics --- 00:28:19.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.635 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:28:19.635 23:31:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:19.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:28:19.635 00:28:19.635 --- 10.0.0.1 ping statistics --- 00:28:19.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.635 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:28:19.635 23:31:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.635 23:31:08 -- nvmf/common.sh@411 -- # return 0 00:28:19.635 23:31:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:19.635 23:31:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.635 23:31:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:19.635 23:31:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.635 23:31:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:19.635 23:31:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:19.635 23:31:08 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:19.635 23:31:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:19.635 23:31:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:19.635 23:31:08 -- common/autotest_common.sh@10 -- # set +x 00:28:19.635 23:31:08 -- nvmf/common.sh@470 -- # nvmfpid=4094410 00:28:19.635 23:31:08 -- nvmf/common.sh@471 -- # waitforlisten 4094410 00:28:19.635 23:31:08 -- common/autotest_common.sh@817 -- # '[' -z 4094410 ']' 00:28:19.635 23:31:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.635 23:31:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:19.635 23:31:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.635 23:31:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:19.635 23:31:08 -- common/autotest_common.sh@10 -- # set +x 00:28:19.635 23:31:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:19.635 [2024-04-26 23:31:08.876224] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
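The trace above is nvmf_tcp_init building the single-host test topology: the first ice port (cvl_0_0) is moved into a private network namespace to act as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens the NVMe/TCP port before both directions are ping-verified. Condensed, using the interface and namespace names this particular run happened to use:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator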
00:28:19.635 [2024-04-26 23:31:08.876275] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.896 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.896 [2024-04-26 23:31:08.942467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.896 [2024-04-26 23:31:08.973185] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.896 [2024-04-26 23:31:08.973227] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.896 [2024-04-26 23:31:08.973236] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.896 [2024-04-26 23:31:08.973244] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.896 [2024-04-26 23:31:08.973250] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.896 [2024-04-26 23:31:08.973758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.896 [2024-04-26 23:31:08.973851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.896 [2024-04-26 23:31:08.973960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.896 [2024-04-26 23:31:08.974063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.468 23:31:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:20.468 23:31:09 -- common/autotest_common.sh@850 -- # return 0 00:28:20.468 23:31:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:20.468 23:31:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:20.468 23:31:09 -- common/autotest_common.sh@10 -- # set +x 00:28:20.468 23:31:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.468 23:31:09 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:20.468 23:31:09 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:21.038 23:31:10 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:21.038 23:31:10 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:21.298 23:31:10 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:28:21.298 23:31:10 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:21.298 23:31:10 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:21.298 23:31:10 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:28:21.298 23:31:10 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:21.298 23:31:10 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:21.298 23:31:10 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:21.558 [2024-04-26 23:31:10.663937] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.558 23:31:10 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.817 23:31:10 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:21.817 23:31:10 -- host/perf.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.817 23:31:11 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:21.817 23:31:11 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:22.078 23:31:11 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.338 [2024-04-26 23:31:11.342472] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.338 23:31:11 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:22.338 23:31:11 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:28:22.338 23:31:11 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:22.338 23:31:11 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:22.338 23:31:11 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:23.724 Initializing NVMe Controllers 00:28:23.724 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:28:23.724 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:28:23.724 Initialization complete. Launching workers. 00:28:23.724 ======================================================== 00:28:23.724 Latency(us) 00:28:23.724 Device Information : IOPS MiB/s Average min max 00:28:23.724 PCIE (0000:65:00.0) NSID 1 from core 0: 80801.35 315.63 395.49 13.32 7220.26 00:28:23.724 ======================================================== 00:28:23.724 Total : 80801.35 315.63 395.49 13.32 7220.26 00:28:23.724 00:28:23.724 23:31:12 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:23.724 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.107 Initializing NVMe Controllers 00:28:25.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:25.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:25.107 Initialization complete. Launching workers. 
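Before the perf runs, host/perf.sh provisions the target entirely over JSON-RPC, as traced above. Reduced to its essentials (SPDK_DIR stands in for the workspace checkout; -a admits any host NQN, -s sets the serial number):

rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" bdev_malloc_create 64 512                        # 64 MiB ram bdev, 512 B blocks -> Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # the local NVMe drive
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420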
00:28:25.107 ======================================================== 00:28:25.107 Latency(us) 00:28:25.107 Device Information : IOPS MiB/s Average min max 00:28:25.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 117.00 0.46 8663.07 259.43 44814.56 00:28:25.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 70.00 0.27 14821.76 7930.62 47902.39 00:28:25.107 ======================================================== 00:28:25.107 Total : 187.00 0.73 10968.46 259.43 47902.39 00:28:25.107 00:28:25.107 23:31:14 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:25.107 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.492 Initializing NVMe Controllers 00:28:26.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:26.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:26.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:26.492 Initialization complete. Launching workers. 00:28:26.492 ======================================================== 00:28:26.492 Latency(us) 00:28:26.492 Device Information : IOPS MiB/s Average min max 00:28:26.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10357.37 40.46 3089.81 486.49 6572.97 00:28:26.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3833.55 14.97 8349.66 6911.08 18138.14 00:28:26.492 ======================================================== 00:28:26.492 Total : 14190.92 55.43 4510.71 486.49 18138.14 00:28:26.492 00:28:26.492 23:31:15 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:26.492 23:31:15 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:26.492 23:31:15 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:26.492 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.035 Initializing NVMe Controllers 00:28:29.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.035 Controller IO queue size 128, less than required. 00:28:29.035 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:29.035 Controller IO queue size 128, less than required. 00:28:29.035 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:29.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:29.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:29.035 Initialization complete. Launching workers. 
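Each run above and below is spdk_nvme_perf pointed either at the local PCIe controller or at the fabrics target; the same flags recur throughout, so one annotated example (glosses per spdk_nvme_perf --help, worth confirming against your build):

# -q queue depth, -o I/O size in bytes, -w workload pattern,
# -M read percentage for mixed workloads, -t run time in seconds,
# -r target transport ID
spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'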
00:28:29.035 ======================================================== 00:28:29.035 Latency(us) 00:28:29.035 Device Information : IOPS MiB/s Average min max 00:28:29.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1316.35 329.09 99150.37 69771.22 148486.87 00:28:29.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.97 151.24 218745.31 72373.15 304581.98 00:28:29.035 ======================================================== 00:28:29.035 Total : 1921.33 480.33 136807.54 69771.22 304581.98 00:28:29.035 00:28:29.035 23:31:17 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:29.035 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.035 No valid NVMe controllers or AIO or URING devices found 00:28:29.295 Initializing NVMe Controllers 00:28:29.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.295 Controller IO queue size 128, less than required. 00:28:29.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:29.296 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:29.296 Controller IO queue size 128, less than required. 00:28:29.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:29.296 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:29.296 WARNING: Some requested NVMe devices were skipped 00:28:29.296 23:31:18 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:29.296 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.863 Initializing NVMe Controllers 00:28:31.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.863 Controller IO queue size 128, less than required. 00:28:31.863 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:31.863 Controller IO queue size 128, less than required. 00:28:31.863 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:31.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:31.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:31.863 Initialization complete. Launching workers. 
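The -q 128 -o 36964 run above found nothing to test: 36964 is not a multiple of either namespace's 512-byte sector size, so both namespaces were removed and no valid controllers remained. The arithmetic:

echo $(( 36964 % 512 ))    # 100 -> namespaces skipped
echo $(( 262144 % 512 ))   # 0   -> the 256 KiB runs proceed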
00:28:31.863 00:28:31.863 ==================== 00:28:31.863 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:31.863 TCP transport: 00:28:31.863 polls: 27864 00:28:31.863 idle_polls: 14595 00:28:31.863 sock_completions: 13269 00:28:31.863 nvme_completions: 5235 00:28:31.863 submitted_requests: 7900 00:28:31.863 queued_requests: 1 00:28:31.863 00:28:31.863 ==================== 00:28:31.863 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:31.863 TCP transport: 00:28:31.863 polls: 28634 00:28:31.863 idle_polls: 12963 00:28:31.863 sock_completions: 15671 00:28:31.863 nvme_completions: 5593 00:28:31.863 submitted_requests: 8424 00:28:31.863 queued_requests: 1 00:28:31.863 ======================================================== 00:28:31.863 Latency(us) 00:28:31.863 Device Information : IOPS MiB/s Average min max 00:28:31.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1308.28 327.07 99922.16 48134.54 162426.22 00:28:31.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1397.77 349.44 93489.57 52409.41 157179.68 00:28:31.864 ======================================================== 00:28:31.864 Total : 2706.05 676.51 96599.51 48134.54 162426.22 00:28:31.864 00:28:31.864 23:31:20 -- host/perf.sh@66 -- # sync 00:28:31.864 23:31:20 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:31.864 23:31:21 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:31.864 23:31:21 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:28:31.864 23:31:21 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:33.247 23:31:22 -- host/perf.sh@72 -- # ls_guid=28dcb075-e980-4648-82b1-c468ecf698c0 00:28:33.247 23:31:22 -- host/perf.sh@73 -- # get_lvs_free_mb 28dcb075-e980-4648-82b1-c468ecf698c0 00:28:33.247 23:31:22 -- common/autotest_common.sh@1350 -- # local lvs_uuid=28dcb075-e980-4648-82b1-c468ecf698c0 00:28:33.247 23:31:22 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:33.247 23:31:22 -- common/autotest_common.sh@1352 -- # local fc 00:28:33.247 23:31:22 -- common/autotest_common.sh@1353 -- # local cs 00:28:33.247 23:31:22 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:33.247 23:31:22 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:33.247 { 00:28:33.247 "uuid": "28dcb075-e980-4648-82b1-c468ecf698c0", 00:28:33.247 "name": "lvs_0", 00:28:33.247 "base_bdev": "Nvme0n1", 00:28:33.247 "total_data_clusters": 457407, 00:28:33.247 "free_clusters": 457407, 00:28:33.247 "block_size": 512, 00:28:33.247 "cluster_size": 4194304 00:28:33.247 } 00:28:33.247 ]' 00:28:33.247 23:31:22 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="28dcb075-e980-4648-82b1-c468ecf698c0") .free_clusters' 00:28:33.247 23:31:22 -- common/autotest_common.sh@1355 -- # fc=457407 00:28:33.247 23:31:22 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="28dcb075-e980-4648-82b1-c468ecf698c0") .cluster_size' 00:28:33.247 23:31:22 -- common/autotest_common.sh@1356 -- # cs=4194304 00:28:33.247 23:31:22 -- common/autotest_common.sh@1359 -- # free_mb=1829628 00:28:33.247 23:31:22 -- common/autotest_common.sh@1360 -- # echo 1829628 00:28:33.247 1829628 00:28:33.247 23:31:22 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:28:33.247 
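get_lvs_free_mb above is simply free_clusters times cluster_size: 457407 clusters of 4 MiB give 1829628 MB, which the script then caps at 20480 MB for lbd_0. Unrolled with the jq filters from the trace (rpc.py abbreviates the workspace path):

uuid=28dcb075-e980-4648-82b1-c468ecf698c0
fc=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")   # 457407
cs=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")    # 4194304
echo $(( fc * cs / 1024 / 1024 ))   # 1829628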
23:31:22 -- host/perf.sh@78 -- # free_mb=20480 00:28:33.247 23:31:22 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 28dcb075-e980-4648-82b1-c468ecf698c0 lbd_0 20480 00:28:33.507 23:31:22 -- host/perf.sh@80 -- # lb_guid=c068e314-5aab-4f8f-8c4f-02b285ae4557 00:28:33.507 23:31:22 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore c068e314-5aab-4f8f-8c4f-02b285ae4557 lvs_n_0 00:28:35.425 23:31:24 -- host/perf.sh@83 -- # ls_nested_guid=0170c497-aa24-4bf2-9481-15504430371b 00:28:35.425 23:31:24 -- host/perf.sh@84 -- # get_lvs_free_mb 0170c497-aa24-4bf2-9481-15504430371b 00:28:35.425 23:31:24 -- common/autotest_common.sh@1350 -- # local lvs_uuid=0170c497-aa24-4bf2-9481-15504430371b 00:28:35.425 23:31:24 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:35.425 23:31:24 -- common/autotest_common.sh@1352 -- # local fc 00:28:35.425 23:31:24 -- common/autotest_common.sh@1353 -- # local cs 00:28:35.425 23:31:24 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:35.425 23:31:24 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:35.425 { 00:28:35.425 "uuid": "28dcb075-e980-4648-82b1-c468ecf698c0", 00:28:35.425 "name": "lvs_0", 00:28:35.425 "base_bdev": "Nvme0n1", 00:28:35.425 "total_data_clusters": 457407, 00:28:35.425 "free_clusters": 452287, 00:28:35.425 "block_size": 512, 00:28:35.425 "cluster_size": 4194304 00:28:35.425 }, 00:28:35.425 { 00:28:35.425 "uuid": "0170c497-aa24-4bf2-9481-15504430371b", 00:28:35.425 "name": "lvs_n_0", 00:28:35.425 "base_bdev": "c068e314-5aab-4f8f-8c4f-02b285ae4557", 00:28:35.425 "total_data_clusters": 5114, 00:28:35.425 "free_clusters": 5114, 00:28:35.425 "block_size": 512, 00:28:35.425 "cluster_size": 4194304 00:28:35.425 } 00:28:35.425 ]' 00:28:35.425 23:31:24 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="0170c497-aa24-4bf2-9481-15504430371b") .free_clusters' 00:28:35.425 23:31:24 -- common/autotest_common.sh@1355 -- # fc=5114 00:28:35.425 23:31:24 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="0170c497-aa24-4bf2-9481-15504430371b") .cluster_size' 00:28:35.425 23:31:24 -- common/autotest_common.sh@1356 -- # cs=4194304 00:28:35.425 23:31:24 -- common/autotest_common.sh@1359 -- # free_mb=20456 00:28:35.425 23:31:24 -- common/autotest_common.sh@1360 -- # echo 20456 00:28:35.425 20456 00:28:35.425 23:31:24 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:35.425 23:31:24 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0170c497-aa24-4bf2-9481-15504430371b lbd_nest_0 20456 00:28:35.686 23:31:24 -- host/perf.sh@88 -- # lb_nested_guid=c381aa49-2a55-4813-aef1-8c5de2e97ef5 00:28:35.686 23:31:24 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.686 23:31:24 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:35.686 23:31:24 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 c381aa49-2a55-4813-aef1-8c5de2e97ef5 00:28:35.948 23:31:25 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.948 23:31:25 -- host/perf.sh@95 -- # 
qd_depth=("1" "32" "128") 00:28:35.948 23:31:25 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:35.948 23:31:25 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:35.948 23:31:25 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:35.948 23:31:25 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:36.209 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.441 Initializing NVMe Controllers 00:28:48.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:48.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:48.441 Initialization complete. Launching workers. 00:28:48.441 ======================================================== 00:28:48.441 Latency(us) 00:28:48.441 Device Information : IOPS MiB/s Average min max 00:28:48.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 42.40 0.02 23638.07 254.74 45145.60 00:28:48.441 ======================================================== 00:28:48.441 Total : 42.40 0.02 23638.07 254.74 45145.60 00:28:48.441 00:28:48.441 23:31:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:48.441 23:31:35 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:48.441 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.447 Initializing NVMe Controllers 00:28:58.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:58.447 Initialization complete. Launching workers. 00:28:58.447 ======================================================== 00:28:58.447 Latency(us) 00:28:58.447 Device Information : IOPS MiB/s Average min max 00:28:58.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 61.90 7.74 16187.80 5983.17 55867.73 00:28:58.447 ======================================================== 00:28:58.447 Total : 61.90 7.74 16187.80 5983.17 55867.73 00:28:58.447 00:28:58.447 23:31:45 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:58.447 23:31:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:58.447 23:31:45 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:58.447 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.512 Initializing NVMe Controllers 00:29:08.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:08.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:08.512 Initialization complete. Launching workers. 
00:29:08.512 ======================================================== 00:29:08.512 Latency(us) 00:29:08.512 Device Information : IOPS MiB/s Average min max 00:29:08.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9007.74 4.40 3557.62 276.58 47886.45 00:29:08.512 ======================================================== 00:29:08.512 Total : 9007.74 4.40 3557.62 276.58 47886.45 00:29:08.512 00:29:08.512 23:31:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:08.512 23:31:56 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:08.512 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.579 Initializing NVMe Controllers 00:29:18.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:18.579 Initialization complete. Launching workers. 00:29:18.579 ======================================================== 00:29:18.579 Latency(us) 00:29:18.579 Device Information : IOPS MiB/s Average min max 00:29:18.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2841.00 355.12 11268.31 738.02 24518.91 00:29:18.579 ======================================================== 00:29:18.579 Total : 2841.00 355.12 11268.31 738.02 24518.91 00:29:18.579 00:29:18.579 23:32:06 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:18.579 23:32:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:18.579 23:32:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.579 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.589 [2024-04-26 23:32:16.904672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ccf80 is same with the state(5) to be set 00:29:28.589 Initializing NVMe Controllers 00:29:28.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:28.590 Controller IO queue size 128, less than required. 00:29:28.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:28.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:28.590 Initialization complete. Launching workers. 00:29:28.590 ======================================================== 00:29:28.590 Latency(us) 00:29:28.590 Device Information : IOPS MiB/s Average min max 00:29:28.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12114.81 5.92 10567.30 1907.45 26302.22 00:29:28.590 ======================================================== 00:29:28.590 Total : 12114.81 5.92 10567.30 1907.45 26302.22 00:29:28.590 00:29:28.590 23:32:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:28.590 23:32:16 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:28.590 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.586 Initializing NVMe Controllers 00:29:38.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.586 Controller IO queue size 128, less than required. 
00:29:38.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:38.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:38.586 Initialization complete. Launching workers. 00:29:38.586 ======================================================== 00:29:38.586 Latency(us) 00:29:38.586 Device Information : IOPS MiB/s Average min max 00:29:38.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1169.30 146.16 109637.29 15218.66 240037.41 00:29:38.586 ======================================================== 00:29:38.586 Total : 1169.30 146.16 109637.29 15218.66 240037.41 00:29:38.586 00:29:38.586 23:32:27 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:38.586 23:32:27 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c381aa49-2a55-4813-aef1-8c5de2e97ef5 00:29:39.970 23:32:29 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:40.230 23:32:29 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c068e314-5aab-4f8f-8c4f-02b285ae4557 00:29:40.230 23:32:29 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:40.491 23:32:29 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:40.491 23:32:29 -- host/perf.sh@114 -- # nvmftestfini 00:29:40.491 23:32:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:40.491 23:32:29 -- nvmf/common.sh@117 -- # sync 00:29:40.491 23:32:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:40.491 23:32:29 -- nvmf/common.sh@120 -- # set +e 00:29:40.491 23:32:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:40.491 23:32:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:40.491 rmmod nvme_tcp 00:29:40.491 rmmod nvme_fabrics 00:29:40.491 rmmod nvme_keyring 00:29:40.491 23:32:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:40.491 23:32:29 -- nvmf/common.sh@124 -- # set -e 00:29:40.491 23:32:29 -- nvmf/common.sh@125 -- # return 0 00:29:40.491 23:32:29 -- nvmf/common.sh@478 -- # '[' -n 4094410 ']' 00:29:40.491 23:32:29 -- nvmf/common.sh@479 -- # killprocess 4094410 00:29:40.491 23:32:29 -- common/autotest_common.sh@936 -- # '[' -z 4094410 ']' 00:29:40.491 23:32:29 -- common/autotest_common.sh@940 -- # kill -0 4094410 00:29:40.491 23:32:29 -- common/autotest_common.sh@941 -- # uname 00:29:40.491 23:32:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:40.491 23:32:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4094410 00:29:40.752 23:32:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:40.752 23:32:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:40.752 23:32:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4094410' 00:29:40.752 killing process with pid 4094410 00:29:40.752 23:32:29 -- common/autotest_common.sh@955 -- # kill 4094410 00:29:40.752 23:32:29 -- common/autotest_common.sh@960 -- # wait 4094410 00:29:42.664 23:32:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:42.664 23:32:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:42.664 23:32:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:42.664 23:32:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:42.664 23:32:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:42.664 23:32:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.664 23:32:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:42.664 23:32:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.576 23:32:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:44.576 00:29:44.576 real 1m31.792s 00:29:44.576 user 5m27.112s 00:29:44.576 sys 0m13.837s 00:29:44.576 23:32:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:44.576 23:32:33 -- common/autotest_common.sh@10 -- # set +x 00:29:44.576 ************************************ 00:29:44.576 END TEST nvmf_perf 00:29:44.576 ************************************ 00:29:44.836 23:32:33 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:44.836 23:32:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:44.836 23:32:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:44.836 23:32:33 -- common/autotest_common.sh@10 -- # set +x 00:29:44.836 ************************************ 00:29:44.836 START TEST nvmf_fio_host 00:29:44.836 ************************************ 00:29:44.836 23:32:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:45.098 * Looking for test storage... 00:29:45.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.098 23:32:34 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.098 23:32:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.098 23:32:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.098 23:32:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.098 23:32:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.098 23:32:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.098 23:32:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.098 23:32:34 -- paths/export.sh@5 -- # export PATH 00:29:45.098 23:32:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.098 23:32:34 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.098 23:32:34 -- nvmf/common.sh@7 -- # uname -s 00:29:45.098 23:32:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.098 23:32:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.098 23:32:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.098 23:32:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.098 23:32:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.098 23:32:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.098 23:32:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.098 23:32:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.098 23:32:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.098 23:32:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.098 23:32:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:45.098 23:32:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:45.098 23:32:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.098 23:32:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.098 23:32:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.098 23:32:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.098 23:32:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.098 23:32:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.098 23:32:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.098 23:32:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.098 23:32:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.098 23:32:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.098 23:32:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.098 23:32:34 -- paths/export.sh@5 -- # export PATH 00:29:45.098 23:32:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.098 23:32:34 -- nvmf/common.sh@47 -- # : 0 00:29:45.098 23:32:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:45.098 23:32:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:45.098 23:32:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.098 23:32:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.098 23:32:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.098 23:32:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:45.098 23:32:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:45.098 23:32:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:45.098 23:32:34 -- host/fio.sh@12 -- # nvmftestinit 00:29:45.098 23:32:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:45.098 23:32:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.098 23:32:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:45.098 23:32:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:45.098 23:32:34 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:29:45.098 23:32:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.098 23:32:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:45.098 23:32:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.098 23:32:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:45.098 23:32:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:45.098 23:32:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:45.098 23:32:34 -- common/autotest_common.sh@10 -- # set +x 00:29:53.243 23:32:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:53.243 23:32:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:53.243 23:32:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:53.243 23:32:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:53.243 23:32:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:53.243 23:32:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:53.243 23:32:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:53.243 23:32:41 -- nvmf/common.sh@295 -- # net_devs=() 00:29:53.243 23:32:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:53.243 23:32:41 -- nvmf/common.sh@296 -- # e810=() 00:29:53.243 23:32:41 -- nvmf/common.sh@296 -- # local -ga e810 00:29:53.243 23:32:41 -- nvmf/common.sh@297 -- # x722=() 00:29:53.243 23:32:41 -- nvmf/common.sh@297 -- # local -ga x722 00:29:53.243 23:32:41 -- nvmf/common.sh@298 -- # mlx=() 00:29:53.243 23:32:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:53.243 23:32:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.243 23:32:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:53.243 23:32:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:53.243 23:32:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:53.243 23:32:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.243 23:32:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:53.243 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:53.243 23:32:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.243 23:32:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:53.243 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:53.243 23:32:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:53.243 23:32:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.243 23:32:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.243 23:32:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:53.243 23:32:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.243 23:32:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:53.243 Found net devices under 0000:31:00.0: cvl_0_0 00:29:53.243 23:32:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.243 23:32:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.243 23:32:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.243 23:32:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:53.243 23:32:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.243 23:32:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:53.243 Found net devices under 0000:31:00.1: cvl_0_1 00:29:53.243 23:32:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.243 23:32:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:53.243 23:32:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:53.243 23:32:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:53.243 23:32:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:53.243 23:32:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.243 23:32:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.243 23:32:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.243 23:32:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:53.243 23:32:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.243 23:32:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.243 23:32:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:53.243 23:32:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.243 23:32:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.243 23:32:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:53.243 23:32:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:53.243 23:32:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.243 23:32:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.243 23:32:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.243 23:32:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.243 23:32:41 -- nvmf/common.sh@258 -- # ip link set 
cvl_0_1 up 00:29:53.243 23:32:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.243 23:32:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.243 23:32:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.243 23:32:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:53.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:29:53.244 00:29:53.244 --- 10.0.0.2 ping statistics --- 00:29:53.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.244 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:29:53.244 23:32:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:29:53.244 00:29:53.244 --- 10.0.0.1 ping statistics --- 00:29:53.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.244 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:29:53.244 23:32:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.244 23:32:41 -- nvmf/common.sh@411 -- # return 0 00:29:53.244 23:32:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:53.244 23:32:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.244 23:32:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:53.244 23:32:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:53.244 23:32:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.244 23:32:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:53.244 23:32:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:53.244 23:32:41 -- host/fio.sh@14 -- # [[ y != y ]] 00:29:53.244 23:32:41 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:29:53.244 23:32:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:53.244 23:32:41 -- common/autotest_common.sh@10 -- # set +x 00:29:53.244 23:32:41 -- host/fio.sh@22 -- # nvmfpid=4114263 00:29:53.244 23:32:41 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:53.244 23:32:41 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:53.244 23:32:41 -- host/fio.sh@26 -- # waitforlisten 4114263 00:29:53.244 23:32:41 -- common/autotest_common.sh@817 -- # '[' -z 4114263 ']' 00:29:53.244 23:32:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.244 23:32:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:53.244 23:32:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.244 23:32:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:53.244 23:32:41 -- common/autotest_common.sh@10 -- # set +x 00:29:53.244 [2024-04-26 23:32:41.545201] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
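fio.sh launches its own target inside the target namespace and blocks until the RPC socket is up; per the trace, -i sets the shared-memory id, -e the tracepoint group mask and -m the reactor core mask. A sketch, with SPDK_DIR again standing in for the workspace checkout:

ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"   # harness helper; polls until /var/tmp/spdk.sock accepts RPCs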
00:29:53.244 [2024-04-26 23:32:41.545266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.244 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.244 [2024-04-26 23:32:41.616995] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.244 [2024-04-26 23:32:41.655521] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.244 [2024-04-26 23:32:41.655568] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.244 [2024-04-26 23:32:41.655577] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.244 [2024-04-26 23:32:41.655584] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.244 [2024-04-26 23:32:41.655589] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.244 [2024-04-26 23:32:41.655701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.244 [2024-04-26 23:32:41.655851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.244 [2024-04-26 23:32:41.655998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.244 [2024-04-26 23:32:41.656143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.244 23:32:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:53.244 23:32:42 -- common/autotest_common.sh@850 -- # return 0 00:29:53.244 23:32:42 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.244 23:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.244 23:32:42 -- common/autotest_common.sh@10 -- # set +x 00:29:53.244 [2024-04-26 23:32:42.334393] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.244 23:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.244 23:32:42 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:29:53.244 23:32:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:53.244 23:32:42 -- common/autotest_common.sh@10 -- # set +x 00:29:53.244 23:32:42 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:53.244 23:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.244 23:32:42 -- common/autotest_common.sh@10 -- # set +x 00:29:53.244 Malloc1 00:29:53.244 23:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.244 23:32:42 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.244 23:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.244 23:32:42 -- common/autotest_common.sh@10 -- # set +x 00:29:53.244 23:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.244 23:32:42 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:53.244 23:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.244 23:32:42 -- common/autotest_common.sh@10 -- # set +x 00:29:53.244 23:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.244 23:32:42 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.244 23:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.244 23:32:42 -- common/autotest_common.sh@10 -- # set +x 
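The rpc_cmd calls traced here are thin wrappers around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. The equivalent bring-up as plain rpc.py invocations, covering the calls traced above plus the discovery listener added just below, with flags and NQN/serial values copied verbatim from this run (RPC is a local shorthand, not a variable the scripts define):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192     # flags as in NVMF_TRANSPORT_OPTS above
    $RPC bdev_malloc_create 64 512 -b Malloc1        # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

host/fio.sh then drives I/O through the fio SPDK plugin: it LD_PRELOADs build/fio/spdk_nvme and hands fio the connection parameters as a filename, --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1', which is exactly what the fio_plugin trace below shows.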
00:29:53.244 [2024-04-26 23:32:42.433856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.244 23:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.244 23:32:42 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.244 23:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.244 23:32:42 -- common/autotest_common.sh@10 -- # set +x 00:29:53.244 23:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.244 23:32:42 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:53.244 23:32:42 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:53.244 23:32:42 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:53.244 23:32:42 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:53.244 23:32:42 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:53.244 23:32:42 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:53.244 23:32:42 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.244 23:32:42 -- common/autotest_common.sh@1327 -- # shift 00:29:53.244 23:32:42 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:53.244 23:32:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.244 23:32:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.244 23:32:42 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:53.244 23:32:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:53.244 23:32:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:53.244 23:32:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:53.244 23:32:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.244 23:32:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.244 23:32:42 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:53.244 23:32:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:53.524 23:32:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:53.524 23:32:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:53.524 23:32:42 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:53.524 23:32:42 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:53.813 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:53.813 fio-3.35 00:29:53.813 Starting 1 thread 00:29:53.813 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.373 [2024-04-26 23:32:45.132854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a3380 is 
same with the state(5) to be set 00:29:56.373 00:29:56.373 test: (groupid=0, jobs=1): err= 0: pid=4114766: Fri Apr 26 23:32:45 2024 00:29:56.373 read: IOPS=9069, BW=35.4MiB/s (37.1MB/s)(71.1MiB/2006msec) 00:29:56.373 slat (usec): min=2, max=275, avg= 2.20, stdev= 2.94 00:29:56.373 clat (usec): min=3695, max=13629, avg=7813.62, stdev=579.25 00:29:56.373 lat (usec): min=3727, max=13631, avg=7815.82, stdev=579.07 00:29:56.373 clat percentiles (usec): 00:29:56.373 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:29:56.373 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:29:56.373 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[ 8717], 00:29:56.373 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[12125], 99.95th=[12387], 00:29:56.373 | 99.99th=[13566] 00:29:56.373 bw ( KiB/s): min=35704, max=36712, per=99.88%, avg=36236.00, stdev=451.49, samples=4 00:29:56.373 iops : min= 8926, max= 9178, avg=9059.00, stdev=112.87, samples=4 00:29:56.373 write: IOPS=9082, BW=35.5MiB/s (37.2MB/s)(71.2MiB/2006msec); 0 zone resets 00:29:56.373 slat (usec): min=2, max=265, avg= 2.30, stdev= 2.18 00:29:56.373 clat (usec): min=2909, max=11684, avg=6255.53, stdev=477.50 00:29:56.373 lat (usec): min=2927, max=11687, avg=6257.83, stdev=477.44 00:29:56.373 clat percentiles (usec): 00:29:56.373 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5866], 00:29:56.373 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:29:56.373 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6783], 95.00th=[ 6980], 00:29:56.373 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[10028], 99.95th=[10421], 00:29:56.373 | 99.99th=[11469] 00:29:56.373 bw ( KiB/s): min=36096, max=36424, per=100.00%, avg=36338.00, stdev=161.38, samples=4 00:29:56.373 iops : min= 9024, max= 9106, avg=9084.50, stdev=40.34, samples=4 00:29:56.373 lat (msec) : 4=0.06%, 10=99.78%, 20=0.16% 00:29:56.374 cpu : usr=71.67%, sys=25.99%, ctx=94, majf=0, minf=5 00:29:56.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:56.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:56.374 issued rwts: total=18194,18220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:56.374 00:29:56.374 Run status group 0 (all jobs): 00:29:56.374 READ: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.1MiB (74.5MB), run=2006-2006msec 00:29:56.374 WRITE: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.2MiB (74.6MB), run=2006-2006msec 00:29:56.374 23:32:45 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.374 23:32:45 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.374 23:32:45 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:56.374 23:32:45 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:56.374 23:32:45 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:56.374 23:32:45 -- common/autotest_common.sh@1326 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.374 23:32:45 -- common/autotest_common.sh@1327 -- # shift 00:29:56.374 23:32:45 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:56.374 23:32:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.374 23:32:45 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.374 23:32:45 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:56.374 23:32:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:56.374 23:32:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:56.374 23:32:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:56.374 23:32:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.374 23:32:45 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.374 23:32:45 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:56.374 23:32:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:56.374 23:32:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:56.374 23:32:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:56.374 23:32:45 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:56.374 23:32:45 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.374 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:56.374 fio-3.35 00:29:56.374 Starting 1 thread 00:29:56.374 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.923 00:29:58.923 test: (groupid=0, jobs=1): err= 0: pid=4115557: Fri Apr 26 23:32:47 2024 00:29:58.923 read: IOPS=9154, BW=143MiB/s (150MB/s)(287MiB/2008msec) 00:29:58.923 slat (usec): min=3, max=107, avg= 3.65, stdev= 1.67 00:29:58.923 clat (usec): min=1078, max=15921, avg=8449.76, stdev=1995.52 00:29:58.923 lat (usec): min=1082, max=15938, avg=8453.41, stdev=1995.74 00:29:58.923 clat percentiles (usec): 00:29:58.923 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6587], 00:29:58.923 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 8979], 00:29:58.923 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10945], 95.00th=[11600], 00:29:58.923 | 99.00th=[13435], 99.50th=[14091], 99.90th=[15401], 99.95th=[15664], 00:29:58.923 | 99.99th=[15926] 00:29:58.923 bw ( KiB/s): min=62400, max=83488, per=49.44%, avg=72408.00, stdev=8737.64, samples=4 00:29:58.923 iops : min= 3900, max= 5218, avg=4525.50, stdev=546.10, samples=4 00:29:58.923 write: IOPS=5330, BW=83.3MiB/s (87.3MB/s)(148MiB/1774msec); 0 zone resets 00:29:58.923 slat (usec): min=40, max=442, avg=41.34, stdev= 9.36 00:29:58.923 clat (usec): min=2857, max=17171, avg=9589.82, stdev=1579.35 00:29:58.923 lat (usec): min=2897, max=17308, avg=9631.15, stdev=1582.07 00:29:58.923 clat percentiles (usec): 00:29:58.923 | 1.00th=[ 6783], 5.00th=[ 7373], 10.00th=[ 7767], 20.00th=[ 8291], 00:29:58.923 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:29:58.923 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[12387], 00:29:58.923 | 99.00th=[14091], 99.50th=[15139], 99.90th=[16450], 99.95th=[16712], 00:29:58.923 | 
99.99th=[17171] 00:29:58.923 bw ( KiB/s): min=65440, max=87040, per=88.39%, avg=75384.00, stdev=8969.04, samples=4 00:29:58.923 iops : min= 4090, max= 5440, avg=4711.50, stdev=560.56, samples=4 00:29:58.923 lat (msec) : 2=0.01%, 4=0.27%, 10=72.41%, 20=27.32% 00:29:58.923 cpu : usr=84.26%, sys=13.25%, ctx=16, majf=0, minf=28 00:29:58.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:29:58.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:58.923 issued rwts: total=18382,9456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:58.923 00:29:58.923 Run status group 0 (all jobs): 00:29:58.923 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=287MiB (301MB), run=2008-2008msec 00:29:58.923 WRITE: bw=83.3MiB/s (87.3MB/s), 83.3MiB/s-83.3MiB/s (87.3MB/s-87.3MB/s), io=148MiB (155MB), run=1774-1774msec 00:29:58.923 23:32:47 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:58.923 23:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.923 23:32:47 -- common/autotest_common.sh@10 -- # set +x 00:29:58.923 23:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:58.923 23:32:47 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:29:58.923 23:32:47 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:29:58.923 23:32:47 -- host/fio.sh@49 -- # get_nvme_bdfs 00:29:58.923 23:32:47 -- common/autotest_common.sh@1499 -- # bdfs=() 00:29:58.923 23:32:47 -- common/autotest_common.sh@1499 -- # local bdfs 00:29:58.923 23:32:47 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:58.923 23:32:47 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:58.923 23:32:47 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:29:58.923 23:32:47 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:29:58.923 23:32:47 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:29:58.923 23:32:47 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:29:58.923 23:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.923 23:32:47 -- common/autotest_common.sh@10 -- # set +x 00:29:59.185 Nvme0n1 00:29:59.185 23:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.185 23:32:48 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:59.185 23:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.185 23:32:48 -- common/autotest_common.sh@10 -- # set +x 00:29:59.446 23:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.446 23:32:48 -- host/fio.sh@51 -- # ls_guid=ac272f96-6a0b-422f-a882-09dd712aa1fe 00:29:59.446 23:32:48 -- host/fio.sh@52 -- # get_lvs_free_mb ac272f96-6a0b-422f-a882-09dd712aa1fe 00:29:59.446 23:32:48 -- common/autotest_common.sh@1350 -- # local lvs_uuid=ac272f96-6a0b-422f-a882-09dd712aa1fe 00:29:59.446 23:32:48 -- common/autotest_common.sh@1351 -- # local lvs_info 00:29:59.446 23:32:48 -- common/autotest_common.sh@1352 -- # local fc 00:29:59.446 23:32:48 -- common/autotest_common.sh@1353 -- # local cs 00:29:59.446 23:32:48 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:59.446 23:32:48 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:29:59.446 23:32:48 -- common/autotest_common.sh@10 -- # set +x 00:29:59.446 23:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.446 23:32:48 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:29:59.446 { 00:29:59.446 "uuid": "ac272f96-6a0b-422f-a882-09dd712aa1fe", 00:29:59.446 "name": "lvs_0", 00:29:59.446 "base_bdev": "Nvme0n1", 00:29:59.446 "total_data_clusters": 1787, 00:29:59.446 "free_clusters": 1787, 00:29:59.446 "block_size": 512, 00:29:59.446 "cluster_size": 1073741824 00:29:59.446 } 00:29:59.446 ]' 00:29:59.446 23:32:48 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="ac272f96-6a0b-422f-a882-09dd712aa1fe") .free_clusters' 00:29:59.708 23:32:48 -- common/autotest_common.sh@1355 -- # fc=1787 00:29:59.708 23:32:48 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="ac272f96-6a0b-422f-a882-09dd712aa1fe") .cluster_size' 00:29:59.708 23:32:48 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:29:59.708 23:32:48 -- common/autotest_common.sh@1359 -- # free_mb=1829888 00:29:59.708 23:32:48 -- common/autotest_common.sh@1360 -- # echo 1829888 00:29:59.708 1829888 00:29:59.708 23:32:48 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 1829888 00:29:59.708 23:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.708 23:32:48 -- common/autotest_common.sh@10 -- # set +x 00:29:59.708 c97b1615-7c03-43fa-ab8a-2730f432cdb5 00:29:59.708 23:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.708 23:32:48 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:59.708 23:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.708 23:32:48 -- common/autotest_common.sh@10 -- # set +x 00:29:59.708 23:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.708 23:32:48 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:59.708 23:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.708 23:32:48 -- common/autotest_common.sh@10 -- # set +x 00:29:59.708 23:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.708 23:32:48 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:59.708 23:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.708 23:32:48 -- common/autotest_common.sh@10 -- # set +x 00:29:59.708 23:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.708 23:32:48 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:59.708 23:32:48 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:59.708 23:32:48 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:59.708 23:32:48 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.708 23:32:48 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:59.708 23:32:48 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.708 23:32:48 -- common/autotest_common.sh@1327 -- # shift 00:29:59.708 23:32:48 -- 
common/autotest_common.sh@1329 -- # local asan_lib= 00:29:59.708 23:32:48 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.708 23:32:48 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.708 23:32:48 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:59.708 23:32:48 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:59.708 23:32:48 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:59.708 23:32:48 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:59.708 23:32:48 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.708 23:32:48 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.708 23:32:48 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:59.708 23:32:48 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:59.708 23:32:48 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:59.708 23:32:48 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:59.708 23:32:48 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:59.708 23:32:48 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:59.970 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:59.970 fio-3.35 00:29:59.970 Starting 1 thread 00:30:00.230 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.773 00:30:02.773 test: (groupid=0, jobs=1): err= 0: pid=4116443: Fri Apr 26 23:32:51 2024 00:30:02.773 read: IOPS=10.4k, BW=40.5MiB/s (42.4MB/s)(81.2MiB/2005msec) 00:30:02.773 slat (usec): min=2, max=110, avg= 2.22, stdev= 1.02 00:30:02.773 clat (usec): min=2513, max=11458, avg=6823.75, stdev=510.16 00:30:02.773 lat (usec): min=2530, max=11460, avg=6825.97, stdev=510.11 00:30:02.773 clat percentiles (usec): 00:30:02.773 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:30:02.773 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:30:02.773 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7635], 00:30:02.773 | 99.00th=[ 7963], 99.50th=[ 8160], 99.90th=[ 9241], 99.95th=[10159], 00:30:02.773 | 99.99th=[11076] 00:30:02.773 bw ( KiB/s): min=40416, max=41992, per=99.90%, avg=41412.00, stdev=698.48, samples=4 00:30:02.773 iops : min=10104, max=10498, avg=10353.00, stdev=174.62, samples=4 00:30:02.773 write: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(81.2MiB/2005msec); 0 zone resets 00:30:02.773 slat (nsec): min=2127, max=116459, avg=2323.71, stdev=846.22 00:30:02.773 clat (usec): min=1093, max=10269, avg=5455.83, stdev=442.07 00:30:02.773 lat (usec): min=1101, max=10271, avg=5458.15, stdev=442.05 00:30:02.773 clat percentiles (usec): 00:30:02.773 | 1.00th=[ 4424], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:30:02.773 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:30:02.773 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 6128], 00:30:02.773 | 99.00th=[ 6456], 99.50th=[ 6521], 99.90th=[ 8225], 99.95th=[ 9241], 00:30:02.773 | 99.99th=[10159] 00:30:02.773 bw ( KiB/s): min=40976, max=41792, per=99.99%, avg=41476.00, stdev=358.34, samples=4 00:30:02.773 iops : min=10244, max=10448, 
avg=10369.00, stdev=89.58, samples=4 00:30:02.773 lat (msec) : 2=0.02%, 4=0.11%, 10=99.82%, 20=0.05% 00:30:02.773 cpu : usr=68.06%, sys=29.64%, ctx=65, majf=0, minf=14 00:30:02.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:02.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:02.773 issued rwts: total=20779,20792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:02.773 00:30:02.773 Run status group 0 (all jobs): 00:30:02.773 READ: bw=40.5MiB/s (42.4MB/s), 40.5MiB/s-40.5MiB/s (42.4MB/s-42.4MB/s), io=81.2MiB (85.1MB), run=2005-2005msec 00:30:02.773 WRITE: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=81.2MiB (85.2MB), run=2005-2005msec 00:30:02.773 23:32:51 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:02.773 23:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.773 23:32:51 -- common/autotest_common.sh@10 -- # set +x 00:30:02.773 23:32:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.773 23:32:51 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:02.773 23:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.773 23:32:51 -- common/autotest_common.sh@10 -- # set +x 00:30:03.034 23:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.034 23:32:52 -- host/fio.sh@62 -- # ls_nested_guid=aee67274-7ea3-4dae-a960-6d57feb59b4d 00:30:03.034 23:32:52 -- host/fio.sh@63 -- # get_lvs_free_mb aee67274-7ea3-4dae-a960-6d57feb59b4d 00:30:03.034 23:32:52 -- common/autotest_common.sh@1350 -- # local lvs_uuid=aee67274-7ea3-4dae-a960-6d57feb59b4d 00:30:03.034 23:32:52 -- common/autotest_common.sh@1351 -- # local lvs_info 00:30:03.034 23:32:52 -- common/autotest_common.sh@1352 -- # local fc 00:30:03.034 23:32:52 -- common/autotest_common.sh@1353 -- # local cs 00:30:03.034 23:32:52 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:03.034 23:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.034 23:32:52 -- common/autotest_common.sh@10 -- # set +x 00:30:03.034 23:32:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.034 23:32:52 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:30:03.034 { 00:30:03.034 "uuid": "ac272f96-6a0b-422f-a882-09dd712aa1fe", 00:30:03.034 "name": "lvs_0", 00:30:03.034 "base_bdev": "Nvme0n1", 00:30:03.034 "total_data_clusters": 1787, 00:30:03.034 "free_clusters": 0, 00:30:03.034 "block_size": 512, 00:30:03.034 "cluster_size": 1073741824 00:30:03.034 }, 00:30:03.034 { 00:30:03.034 "uuid": "aee67274-7ea3-4dae-a960-6d57feb59b4d", 00:30:03.034 "name": "lvs_n_0", 00:30:03.034 "base_bdev": "c97b1615-7c03-43fa-ab8a-2730f432cdb5", 00:30:03.034 "total_data_clusters": 457025, 00:30:03.034 "free_clusters": 457025, 00:30:03.034 "block_size": 512, 00:30:03.034 "cluster_size": 4194304 00:30:03.034 } 00:30:03.034 ]' 00:30:03.034 23:32:52 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="aee67274-7ea3-4dae-a960-6d57feb59b4d") .free_clusters' 00:30:03.034 23:32:52 -- common/autotest_common.sh@1355 -- # fc=457025 00:30:03.294 23:32:52 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="aee67274-7ea3-4dae-a960-6d57feb59b4d") .cluster_size' 00:30:03.294 23:32:52 -- common/autotest_common.sh@1356 -- # cs=4194304 00:30:03.294 23:32:52 -- 
common/autotest_common.sh@1359 -- # free_mb=1828100 00:30:03.294 23:32:52 -- common/autotest_common.sh@1360 -- # echo 1828100 00:30:03.294 1828100 00:30:03.294 23:32:52 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:30:03.294 23:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.294 23:32:52 -- common/autotest_common.sh@10 -- # set +x 00:30:04.274 11ec1436-930d-4ea9-918f-00b24de8f457 00:30:04.274 23:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.274 23:32:53 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:04.274 23:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.274 23:32:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.274 23:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.274 23:32:53 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:04.274 23:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.274 23:32:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.274 23:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.274 23:32:53 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:04.274 23:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.274 23:32:53 -- common/autotest_common.sh@10 -- # set +x 00:30:04.274 23:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.274 23:32:53 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.274 23:32:53 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.274 23:32:53 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:04.274 23:32:53 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.274 23:32:53 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:04.274 23:32:53 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.274 23:32:53 -- common/autotest_common.sh@1327 -- # shift 00:30:04.274 23:32:53 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:04.274 23:32:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.274 23:32:53 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.274 23:32:53 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:04.274 23:32:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:04.274 23:32:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:04.274 23:32:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:04.274 23:32:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.274 23:32:53 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.274 23:32:53 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:04.274 23:32:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:04.274 
23:32:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:04.274 23:32:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:04.274 23:32:53 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:04.274 23:32:53 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.541 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:04.541 fio-3.35 00:30:04.541 Starting 1 thread 00:30:04.541 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.238 00:30:07.238 test: (groupid=0, jobs=1): err= 0: pid=4117305: Fri Apr 26 23:32:56 2024 00:30:07.238 read: IOPS=9171, BW=35.8MiB/s (37.6MB/s)(71.9MiB/2006msec) 00:30:07.238 slat (usec): min=2, max=110, avg= 2.24, stdev= 1.09 00:30:07.238 clat (usec): min=2891, max=12557, avg=7702.98, stdev=610.54 00:30:07.238 lat (usec): min=2908, max=12560, avg=7705.22, stdev=610.49 00:30:07.238 clat percentiles (usec): 00:30:07.238 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:30:07.238 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:30:07.238 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8586], 00:30:07.238 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[10945], 99.95th=[11994], 00:30:07.238 | 99.99th=[12518] 00:30:07.238 bw ( KiB/s): min=35640, max=37272, per=99.91%, avg=36652.00, stdev=704.74, samples=4 00:30:07.238 iops : min= 8910, max= 9318, avg=9163.00, stdev=176.19, samples=4 00:30:07.238 write: IOPS=9181, BW=35.9MiB/s (37.6MB/s)(71.9MiB/2006msec); 0 zone resets 00:30:07.238 slat (nsec): min=2137, max=94578, avg=2354.61, stdev=745.66 00:30:07.238 clat (usec): min=1486, max=10902, avg=6164.12, stdev=517.12 00:30:07.238 lat (usec): min=1494, max=10904, avg=6166.47, stdev=517.09 00:30:07.238 clat percentiles (usec): 00:30:07.238 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:30:07.238 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:30:07.238 | 70.00th=[ 6390], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6980], 00:30:07.238 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 8848], 99.95th=[ 9896], 00:30:07.238 | 99.99th=[10814] 00:30:07.238 bw ( KiB/s): min=36440, max=36992, per=100.00%, avg=36726.00, stdev=248.68, samples=4 00:30:07.238 iops : min= 9110, max= 9248, avg=9181.50, stdev=62.17, samples=4 00:30:07.238 lat (msec) : 2=0.01%, 4=0.10%, 10=99.79%, 20=0.11% 00:30:07.238 cpu : usr=70.72%, sys=27.18%, ctx=65, majf=0, minf=14 00:30:07.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:07.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:07.238 issued rwts: total=18398,18418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:07.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:07.238 00:30:07.238 Run status group 0 (all jobs): 00:30:07.238 READ: bw=35.8MiB/s (37.6MB/s), 35.8MiB/s-35.8MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2006-2006msec 00:30:07.238 WRITE: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2006-2006msec 00:30:07.238 23:32:56 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:07.238 
23:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.238 23:32:56 -- common/autotest_common.sh@10 -- # set +x 00:30:07.238 23:32:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.238 23:32:56 -- host/fio.sh@72 -- # sync 00:30:07.238 23:32:56 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:07.238 23:32:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.238 23:32:56 -- common/autotest_common.sh@10 -- # set +x 00:30:09.151 23:32:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.151 23:32:57 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:30:09.151 23:32:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.151 23:32:57 -- common/autotest_common.sh@10 -- # set +x 00:30:09.151 23:32:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.151 23:32:57 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:30:09.151 23:32:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.151 23:32:57 -- common/autotest_common.sh@10 -- # set +x 00:30:09.151 23:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.151 23:32:58 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:30:09.151 23:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.151 23:32:58 -- common/autotest_common.sh@10 -- # set +x 00:30:09.151 23:32:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.151 23:32:58 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:30:09.151 23:32:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.151 23:32:58 -- common/autotest_common.sh@10 -- # set +x 00:30:11.063 23:33:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.063 23:33:00 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:30:11.063 23:33:00 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:30:11.063 23:33:00 -- host/fio.sh@84 -- # nvmftestfini 00:30:11.063 23:33:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:11.063 23:33:00 -- nvmf/common.sh@117 -- # sync 00:30:11.063 23:33:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:11.063 23:33:00 -- nvmf/common.sh@120 -- # set +e 00:30:11.063 23:33:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:11.063 23:33:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:11.063 rmmod nvme_tcp 00:30:11.063 rmmod nvme_fabrics 00:30:11.063 rmmod nvme_keyring 00:30:11.063 23:33:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:11.063 23:33:00 -- nvmf/common.sh@124 -- # set -e 00:30:11.063 23:33:00 -- nvmf/common.sh@125 -- # return 0 00:30:11.063 23:33:00 -- nvmf/common.sh@478 -- # '[' -n 4114263 ']' 00:30:11.063 23:33:00 -- nvmf/common.sh@479 -- # killprocess 4114263 00:30:11.063 23:33:00 -- common/autotest_common.sh@936 -- # '[' -z 4114263 ']' 00:30:11.063 23:33:00 -- common/autotest_common.sh@940 -- # kill -0 4114263 00:30:11.063 23:33:00 -- common/autotest_common.sh@941 -- # uname 00:30:11.063 23:33:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:11.063 23:33:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4114263 00:30:11.324 23:33:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:11.324 23:33:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:11.324 23:33:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4114263' 00:30:11.324 killing process with pid 4114263 00:30:11.324 23:33:00 -- common/autotest_common.sh@955 -- # kill 4114263 
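For reference, the free_mb values computed earlier in this test follow directly from the free_clusters and cluster_size fields reported by bdev_lvol_get_lvstores: free_mb = free_clusters * cluster_size / 1 MiB, so 1787 * 1073741824 / 1048576 = 1829888 for lvs_0 and 457025 * 4194304 / 1048576 = 1828100 for the nested lvs_n_0. A sketch of the same computation, selecting the store by name here rather than by the UUID used in the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    fc=$($RPC bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0").free_clusters')
    cs=$($RPC bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0").cluster_size')
    echo $(( fc * cs / 1048576 ))   # usable MiB, passed to bdev_lvol_create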
00:30:11.324 23:33:00 -- common/autotest_common.sh@960 -- # wait 4114263 00:30:11.324 23:33:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:11.324 23:33:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:11.324 23:33:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:11.324 23:33:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:11.324 23:33:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:11.324 23:33:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.324 23:33:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.324 23:33:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.884 23:33:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:13.885 00:30:13.885 real 0m28.528s 00:30:13.885 user 2m18.580s 00:30:13.885 sys 0m9.101s 00:30:13.885 23:33:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:13.885 23:33:02 -- common/autotest_common.sh@10 -- # set +x 00:30:13.885 ************************************ 00:30:13.885 END TEST nvmf_fio_host 00:30:13.885 ************************************ 00:30:13.885 23:33:02 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:13.885 23:33:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:13.885 23:33:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:13.885 23:33:02 -- common/autotest_common.sh@10 -- # set +x 00:30:13.885 ************************************ 00:30:13.885 START TEST nvmf_failover 00:30:13.885 ************************************ 00:30:13.885 23:33:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:13.885 * Looking for test storage... 
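The failover trace below re-runs the NIC discovery from nvmf/common.sh. In outline: PCI functions are bucketed into e810/x722/mlx arrays by vendor:device ID, the e810 set is selected for this rig, and the kernel netdev names are read back from sysfs. A condensed sketch, assuming pci_bus_cache is the associative array populated earlier in common.sh (not shown in this trace):

    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")                      # e810 is the hardware in this run
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")     # basename only: cvl_0_0, cvl_0_1
    done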
00:30:13.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:13.885 23:33:02 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.885 23:33:02 -- nvmf/common.sh@7 -- # uname -s 00:30:13.885 23:33:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.885 23:33:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.885 23:33:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.885 23:33:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.885 23:33:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.885 23:33:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.885 23:33:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.885 23:33:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.885 23:33:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.885 23:33:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.885 23:33:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:13.885 23:33:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:13.885 23:33:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.885 23:33:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.885 23:33:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.885 23:33:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.885 23:33:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.885 23:33:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.885 23:33:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.885 23:33:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.885 23:33:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.885 23:33:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.885 23:33:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.885 23:33:02 -- paths/export.sh@5 -- # export PATH 00:30:13.885 23:33:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.885 23:33:02 -- nvmf/common.sh@47 -- # : 0 00:30:13.885 23:33:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:13.885 23:33:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:13.885 23:33:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.885 23:33:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.885 23:33:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.885 23:33:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:13.885 23:33:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:13.885 23:33:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:13.885 23:33:02 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:13.885 23:33:02 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:13.885 23:33:02 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:13.885 23:33:02 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:13.885 23:33:02 -- host/failover.sh@18 -- # nvmftestinit 00:30:13.885 23:33:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:13.885 23:33:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.885 23:33:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:13.885 23:33:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:13.885 23:33:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:13.885 23:33:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.885 23:33:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:13.885 23:33:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.885 23:33:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:13.885 23:33:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:13.885 23:33:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:13.885 23:33:02 -- common/autotest_common.sh@10 -- # set +x 00:30:22.036 23:33:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:22.036 23:33:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:22.036 23:33:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:22.036 23:33:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:22.036 23:33:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:22.036 23:33:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:22.036 23:33:10 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:30:22.036 23:33:10 -- nvmf/common.sh@295 -- # net_devs=() 00:30:22.036 23:33:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:22.036 23:33:10 -- nvmf/common.sh@296 -- # e810=() 00:30:22.036 23:33:10 -- nvmf/common.sh@296 -- # local -ga e810 00:30:22.036 23:33:10 -- nvmf/common.sh@297 -- # x722=() 00:30:22.036 23:33:10 -- nvmf/common.sh@297 -- # local -ga x722 00:30:22.036 23:33:10 -- nvmf/common.sh@298 -- # mlx=() 00:30:22.036 23:33:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:22.036 23:33:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.036 23:33:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:22.036 23:33:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:22.036 23:33:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:22.036 23:33:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:22.036 23:33:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:22.036 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:22.036 23:33:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:22.036 23:33:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:22.036 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:22.036 23:33:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:22.036 23:33:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:22.036 23:33:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.036 23:33:10 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:30:22.036 23:33:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.036 23:33:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:22.036 Found net devices under 0000:31:00.0: cvl_0_0 00:30:22.036 23:33:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.036 23:33:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:22.036 23:33:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.036 23:33:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:22.036 23:33:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.036 23:33:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:22.036 Found net devices under 0000:31:00.1: cvl_0_1 00:30:22.036 23:33:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.036 23:33:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:22.036 23:33:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:22.036 23:33:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:22.036 23:33:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:22.036 23:33:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.036 23:33:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.036 23:33:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.036 23:33:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:22.036 23:33:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.036 23:33:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.036 23:33:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:22.036 23:33:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.036 23:33:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.036 23:33:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:22.036 23:33:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:22.036 23:33:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.036 23:33:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.036 23:33:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.036 23:33:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.036 23:33:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:22.036 23:33:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.036 23:33:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.036 23:33:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.036 23:33:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:22.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:30:22.036 00:30:22.036 --- 10.0.0.2 ping statistics --- 00:30:22.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.036 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:30:22.036 23:33:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:22.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:30:22.037 00:30:22.037 --- 10.0.0.1 ping statistics --- 00:30:22.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.037 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:30:22.037 23:33:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.037 23:33:10 -- nvmf/common.sh@411 -- # return 0 00:30:22.037 23:33:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:22.037 23:33:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.037 23:33:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:22.037 23:33:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:22.037 23:33:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.037 23:33:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:22.037 23:33:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:22.037 23:33:10 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:22.037 23:33:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:22.037 23:33:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:22.037 23:33:10 -- common/autotest_common.sh@10 -- # set +x 00:30:22.037 23:33:10 -- nvmf/common.sh@470 -- # nvmfpid=4123515 00:30:22.037 23:33:10 -- nvmf/common.sh@471 -- # waitforlisten 4123515 00:30:22.037 23:33:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:22.037 23:33:10 -- common/autotest_common.sh@817 -- # '[' -z 4123515 ']' 00:30:22.037 23:33:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.037 23:33:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:22.037 23:33:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.037 23:33:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:22.037 23:33:10 -- common/autotest_common.sh@10 -- # set +x 00:30:22.037 [2024-04-26 23:33:10.471091] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:30:22.037 [2024-04-26 23:33:10.471160] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.037 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.037 [2024-04-26 23:33:10.543074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:22.037 [2024-04-26 23:33:10.581234] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.037 [2024-04-26 23:33:10.581283] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.037 [2024-04-26 23:33:10.581291] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.037 [2024-04-26 23:33:10.581299] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.037 [2024-04-26 23:33:10.581306] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
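What follows is the failover target configuration proper: one subsystem with three TCP listeners, so the bdevperf initiator can move between portals. The same setup as plain rpc.py calls, with values copied from the trace below:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do               # primary plus two failover portals
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done

bdevperf then attaches NVMe0 to the 4420 and 4421 paths over /var/tmp/bdevperf.sock, and the test exercises failover by removing the 4420 listener while I/O is in flight, as the nvmf_subsystem_remove_listener call at the end of this excerpt shows.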
00:30:22.037 [2024-04-26 23:33:10.581436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:30:22.037 [2024-04-26 23:33:10.581634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:22.037 [2024-04-26 23:33:10.581635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:30:22.037 23:33:11 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:30:22.037 23:33:11 -- common/autotest_common.sh@850 -- # return 0
00:30:22.037 23:33:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:30:22.037 23:33:11 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:22.037 23:33:11 -- common/autotest_common.sh@10 -- # set +x
00:30:22.037 23:33:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:22.298 23:33:11 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
[2024-04-26 23:33:11.430138] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:22.298 23:33:11 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:30:22.559 Malloc0
00:30:22.559 23:33:11 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:22.821 23:33:11 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:22.821 23:33:11 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:23.083 [2024-04-26 23:33:12.120381] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:23.083 23:33:12 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:23.083 [2024-04-26 23:33:12.288830] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:23.083 23:33:12 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:23.345 [2024-04-26 23:33:12.457373] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
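For reference, the target provisioning that failover.sh@22-28 just performed is, stripped of the harness wrappers, a handful of rpc.py calls (values as in this run; the $RPC shorthand and the loop are ours, and the comment on -u reflects rpc.py's io-unit-size option):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192             # TCP transport, 8 KiB I/O unit size
$RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MB ramdisk bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                           # three listeners on one subsystem,
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s $port                        # so paths can be rotated below
done

Three listeners on the same subsystem is the whole point of this test: each TCP port is an independent path that can be torn down and re-created while I/O is running.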
00:30:23.345 23:33:12 -- host/failover.sh@31 -- # bdevperf_pid=4123935
00:30:23.345 23:33:12 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:23.345 23:33:12 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:30:23.345 23:33:12 -- host/failover.sh@34 -- # waitforlisten 4123935 /var/tmp/bdevperf.sock
00:30:23.345 23:33:12 -- common/autotest_common.sh@817 -- # '[' -z 4123935 ']'
00:30:23.345 23:33:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:23.345 23:33:12 -- common/autotest_common.sh@822 -- # local max_retries=100
00:30:23.345 23:33:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:23.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:23.345 23:33:12 -- common/autotest_common.sh@826 -- # xtrace_disable
00:30:23.345 23:33:12 -- common/autotest_common.sh@10 -- # set +x
00:30:24.287 23:33:13 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:30:24.287 23:33:13 -- common/autotest_common.sh@850 -- # return 0
00:30:24.287 23:33:13 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:24.548 NVMe0n1
00:30:24.548 23:33:13 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:24.808 00
00:30:24.808 23:33:14 -- host/failover.sh@39 -- # run_test_pid=4124257
00:30:24.808 23:33:14 -- host/failover.sh@41 -- # sleep 1
00:30:24.808 23:33:14 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:26.196 23:33:15 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:26.196 [2024-04-26 23:33:15.158911] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd280d0 is same with the state(5) to be set
[... the same tcp.c:1587 recv-state message for tqpair=0xd280d0 repeated continuously; duplicate lines omitted ...]
00:30:26.197 23:33:15 -- host/failover.sh@45 -- # sleep 3
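What just happened, condensed: bdev_nvme_attach_controller was issued twice against bdevperf's private RPC socket with the same -b NVMe0 and the same NQN, so ports 4420 and 4421 were registered as alternate paths behind a single NVMe0n1 bdev; removing the 4420 listener then forces the first failover while the verify workload is running. The repeated tcp.c:1587 recv-state errors are qpair-teardown noise on the target side, not a test failure (the run exits with status 0 below). A sketch of the same sequence outside the harness, reusing the $RPC shorthand from above:

# Two paths to one subsystem -> one NVMe0n1 bdev with an active and an alternate path.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Pull the active path out from under the initiator; its outstanding I/O is
# aborted and the nvme bdev module fails over to the surviving 4421 path.
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The steps that follow rotate through the remaining listeners the same way: attach a third path on 4422, drop 4421, re-add 4420, then drop 4422, so each path takes a turn as the only live one.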
00:30:29.500 23:33:18 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:29.501 00
00:30:29.501 23:33:18 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:29.501 [2024-04-26 23:33:18.622871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd288e0 is same with the state(5) to be set
[... the same recv-state message for tqpair=0xd288e0 repeated; duplicate lines omitted ...]
00:30:29.501 23:33:18 -- host/failover.sh@50 -- # sleep 3
00:30:32.795 23:33:21 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:32.795 [2024-04-26 23:33:21.798647] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:32.795 23:33:21 -- host/failover.sh@55 -- # sleep 1
00:30:33.740 23:33:22 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:33.740 [2024-04-26 23:33:22.973761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xace2e0 is same with the state(5) to be set
[... the same recv-state message for tqpair=0xace2e0 repeated; duplicate lines omitted ...]
00:30:34.002 23:33:23 -- host/failover.sh@59 -- # wait 4124257
00:30:40.597 0
00:30:40.597 23:33:29 -- host/failover.sh@61 -- # killprocess 4123935
00:30:40.597 23:33:29 -- common/autotest_common.sh@936 -- # '[' -z 4123935 ']'
00:30:40.597 23:33:29 -- common/autotest_common.sh@940 -- # kill -0 4123935
00:30:40.597 23:33:29 -- common/autotest_common.sh@941 -- # uname
00:30:40.597 23:33:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:40.597 23:33:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4123935
00:30:40.597 23:33:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:30:40.597 23:33:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:30:40.597 23:33:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4123935'
00:30:40.597 killing process with pid 4123935
00:30:40.597 23:33:29 -- common/autotest_common.sh@955 -- # kill 4123935
00:30:40.597 23:33:29 -- common/autotest_common.sh@960 -- # wait 4123935
00:30:40.597 23:33:29 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:40.597 [2024-04-26 23:33:12.531395] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
[2024-04-26 23:33:12.531452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123935 ]
00:30:40.597 EAL: No free 2048 kB hugepages reported on node 1
00:30:40.597 [2024-04-26 23:33:12.591312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:40.597 [2024-04-26 23:33:12.619997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:40.597 Running I/O for 15 seconds...
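The remainder of the output is the harness cat-ing bdevperf's own log (try.txt). The burst of nvme_qpair prints below is the initiator's view of the first failover at 23:33:15: when the 4420 listener disappeared, the target deleted the submission queue, every in-flight verify command completed with ABORTED - SQ DELETION, and bdevperf's nvme bdev path retried the work on the surviving listener, which is why the test still finished with status 0 above. A quick way to summarize such a dump instead of reading it raw (our one-liners, not part of the harness):

grep -c 'ABORTED - SQ DELETION' try.txt          # total completions aborted by the failover
grep -o 'lba:[0-9]*' try.txt | sort -u | wc -l   # distinct LBAs that had I/O in flight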
00:30:40.597 [2024-04-26 23:33:15.159828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.597 [2024-04-26 23:33:15.159868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching print_command/print_completion pairs repeated for the remaining in-flight READ commands (lba 93872 through 94392) and WRITE commands (lba 94440 through 94784), each completed with ABORTED - SQ DELETION; near-identical lines omitted ...]
00:30:40.600 [2024-04-26 23:33:15.161673] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94872 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.600 [2024-04-26 23:33:15.161860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-04-26 23:33:15.161876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-04-26 23:33:15.161894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-04-26 23:33:15.161910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-04-26 23:33:15.161925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.161949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.600 [2024-04-26 23:33:15.161956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.600 [2024-04-26 23:33:15.161963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94432 len:8 PRP1 0x0 PRP2 0x0 00:30:40.600 [2024-04-26 23:33:15.161970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.600 [2024-04-26 23:33:15.162005] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9dabb0 was disconnected and freed. reset controller. 
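Runs like the one above are dozens of near-identical command/completion record pairs that differ only in cid and lba. Rather than reading them record by record, it can help to tally them. A minimal sketch, assuming the console output was saved to a file such as build.log; the tally_aborts helper and its regex are hypothetical, inferred from the record format above, not part of SPDK's tooling:

import re
from collections import Counter

# Matches the command-print records seen above, e.g.
#   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94536 len:8 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def tally_aborts(lines):
    """Count printed I/O commands per opcode and track the LBA range they touched."""
    counts = Counter()
    lba_min, lba_max = {}, {}
    for line in lines:
        m = CMD_RE.search(line)
        if not m:
            continue
        op, lba = m.group("op"), int(m.group("lba"))
        counts[op] += 1
        lba_min[op] = min(lba, lba_min.get(op, lba))
        lba_max[op] = max(lba, lba_max.get(op, lba))
    for op, n in sorted(counts.items()):
        print(f"{op}: {n} commands, lba {lba_min[op]}..{lba_max[op]}")

# Usage (hypothetical file name):
# with open("build.log") as f:
#     tally_aborts(f)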
00:30:40.600 [2024-04-26 23:33:15.162014] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:40.600 [2024-04-26 23:33:15.162032-162086] [four repeated admin record pairs elided: nvme_qpair.c: 223:nvme_admin_qpair_print_command *NOTICE* ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:30:40.600 [2024-04-26 23:33:15.162093] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:40.600 [2024-04-26 23:33:15.165660] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:40.600 [2024-04-26 23:33:15.165682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e52e0 (9): Bad file descriptor
00:30:40.601 [2024-04-26 23:33:15.290260] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
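Each failover cycle above is bounded by the same few records: the qpair disconnect, the bdev_nvme_failover_trid notice, the nvme_ctrlr_disconnect reset, and the _bdev_nvme_reset_ctrlr_complete success (here roughly 23:33:15.162 to 23:33:15.290, i.e. about 128 ms for the first reset). A hedged sketch that pulls that timeline out of a saved log; the failover_timeline helper and the file name are assumptions, not SPDK tooling:

import re
from datetime import datetime

# One regex per event that bounds a failover cycle, keyed on the records above.
EVENTS = {
    "failover":    re.compile(r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)"),
    "reset_start": re.compile(r"nvme_ctrlr_disconnect: \*NOTICE\*: \[(\S+)\] resetting controller"),
    "reset_ok":    re.compile(r"_bdev_nvme_reset_ctrlr_complete: \*NOTICE\*: Resetting controller successful"),
}
# Bracketed wall-clock stamp, e.g. [2024-04-26 23:33:15.162014]; only the
# first stamp on a (possibly wrapped) line is used.
TS_RE = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]")

def failover_timeline(lines):
    """Yield (event, timestamp, captured details) tuples in log order."""
    for line in lines:
        ts = TS_RE.search(line)
        if not ts:
            continue
        stamp = datetime.strptime(ts.group(1), "%Y-%m-%d %H:%M:%S.%f")
        for name, pat in EVENTS.items():
            m = pat.search(line)
            if m:
                yield name, stamp, m.groups()

# Usage (hypothetical file name):
# for event, when, detail in failover_timeline(open("build.log")):
#     print(when, event, detail)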
00:30:40.601 [2024-04-26 23:33:18.619505-619603] [four repeated admin record pairs elided: nvme_qpair.c: 223:nvme_admin_qpair_print_command *NOTICE* ASYNC EVENT REQUEST (0c) qid:0 cid:3,2,1,0, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:30:40.601 [2024-04-26 23:33:18.619610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e52e0 is same with the state(5) to be set
00:30:40.601 [2024-04-26 23:33:18.623157-624893] [dozens of repeated record pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* WRITE sqid:1 lba:109200-109280 and READ sqid:1 lba:108264-109184, each completed by 474:spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:30:40.604 [2024-04-26 23:33:18.625237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaea40 is same with the state(5) to be set
00:30:40.604 [2024-04-26 23:33:18.625245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:40.604 [2024-04-26 23:33:18.625251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:40.604 [2024-04-26 23:33:18.625258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109192 len:8 PRP1 0x0 PRP2 0x0
00:30:40.604 [2024-04-26 23:33:18.625265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.604 [2024-04-26 23:33:18.625298] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbaea40 was disconnected and freed. reset controller.
00:30:40.604 [2024-04-26 23:33:18.625306] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:40.604 [2024-04-26 23:33:18.625314] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:40.604 [2024-04-26 23:33:18.628895] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:40.604 [2024-04-26 23:33:18.628920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e52e0 (9): Bad file descriptor
00:30:40.604 [2024-04-26 23:33:18.660912] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:40.604 [2024-04-26 23:33:22.974542-974661] [repeated record pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* READ sqid:1 lba:31128-31160, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:30:40.604 [2024-04-26 23:33:22.974670]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.604 [2024-04-26 23:33:22.974677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.604 [2024-04-26 23:33:22.974686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.604 [2024-04-26 23:33:22.974692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.604 [2024-04-26 23:33:22.974701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.604 [2024-04-26 23:33:22.974708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.604 [2024-04-26 23:33:22.974717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.974985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.974994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.605 [2024-04-26 23:33:22.975130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.605 [2024-04-26 23:33:22.975145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31608 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.605 [2024-04-26 23:33:22.975161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.605 [2024-04-26 23:33:22.975177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.605 [2024-04-26 23:33:22.975193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.605 [2024-04-26 23:33:22.975209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.605 [2024-04-26 23:33:22.975225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 
[2024-04-26 23:33:22.975321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.605 [2024-04-26 23:33:22.975346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.605 [2024-04-26 23:33:22.975353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.606 [2024-04-26 23:33:22.975369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.606 [2024-04-26 23:33:22.975384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.606 [2024-04-26 23:33:22.975400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.606 [2024-04-26 23:33:22.975416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.606 [2024-04-26 23:33:22.975433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.606 [2024-04-26 23:33:22.975448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.606 [2024-04-26 23:33:22.975466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.606 [2024-04-26 23:33:22.975482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.606 [2024-04-26 23:33:22.975498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.606 [2024-04-26 23:33:22.975513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.606 [2024-04-26 23:33:22.975879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.606 [2024-04-26 23:33:22.975886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.975895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.975901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.975910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.975917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.975926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.975933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.975942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.975949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.975957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.975964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 
23:33:22.975973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.975980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.975988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.975995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32112 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.607 [2024-04-26 23:33:22.976504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.607 [2024-04-26 23:33:22.976512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.608 [2024-04-26 23:33:22.976519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.608 [2024-04-26 23:33:22.976535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.608 [2024-04-26 23:33:22.976551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.608 [2024-04-26 23:33:22.976568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.608 [2024-04-26 23:33:22.976584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.608 [2024-04-26 23:33:22.976600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.608 [2024-04-26 
23:33:22.976616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.608 [2024-04-26 23:33:22.976645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.608 [2024-04-26 23:33:22.976651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31584 len:8 PRP1 0x0 PRP2 0x0 00:30:40.608 [2024-04-26 23:33:22.976659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976695] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9f1840 was disconnected and freed. reset controller. 00:30:40.608 [2024-04-26 23:33:22.976704] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:40.608 [2024-04-26 23:33:22.976722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.608 [2024-04-26 23:33:22.976731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.608 [2024-04-26 23:33:22.976749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.608 [2024-04-26 23:33:22.976763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.608 [2024-04-26 23:33:22.976779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.608 [2024-04-26 23:33:22.976786] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:40.608 [2024-04-26 23:33:22.976817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e52e0 (9): Bad file descriptor 00:30:40.608 [2024-04-26 23:33:22.980353] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:40.608 [2024-04-26 23:33:23.026064] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
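
Each failover pass in the trace above follows the same arc: the TCP qpair is torn down, every command still queued on it is completed with ABORTED - SQ DELETION, bdev_nvme_failover_trid moves the controller to the next listener, and the reset completes. A minimal sketch (not part of failover.sh itself) for pulling just those transitions out of the capture file, assuming the log is the try.txt this test cats and removes further down:

grep -E 'bdev_nvme_failover_trid|Resetting controller successful' try.txt   # show each path switch and completed reset
grep -c 'Resetting controller successful' try.txt                          # count completed resets
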
00:30:40.608
00:30:40.608 Latency(us)
00:30:40.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:40.608 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:40.608 Verification LBA range: start 0x0 length 0x4000
00:30:40.608 NVMe0n1 : 15.01 9932.16 38.80 489.69 0.00 12253.99 771.41 16930.13
00:30:40.608 ===================================================================================================================
00:30:40.608 Total : 9932.16 38.80 489.69 0.00 12253.99 771.41 16930.13
00:30:40.608 Received shutdown signal, test time was about 15.000000 seconds
00:30:40.608
00:30:40.608 Latency(us)
00:30:40.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:40.608 ===================================================================================================================
00:30:40.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:40.608 23:33:29 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:40.608 23:33:29 -- host/failover.sh@65 -- # count=3
00:30:40.608 23:33:29 -- host/failover.sh@67 -- # (( count != 3 ))
00:30:40.608 23:33:29 -- host/failover.sh@73 -- # bdevperf_pid=4127016
00:30:40.608 23:33:29 -- host/failover.sh@75 -- # waitforlisten 4127016 /var/tmp/bdevperf.sock
00:30:40.608 23:33:29 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:40.608 23:33:29 -- common/autotest_common.sh@817 -- # '[' -z 4127016 ']'
00:30:40.608 23:33:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:40.608 23:33:29 -- common/autotest_common.sh@822 -- # local max_retries=100
00:30:40.608 23:33:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
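
The xtrace above (host/failover.sh@65-75) is the pass criterion and the relaunch of bdevperf for the next phase: exactly one successful reset is expected per forced failover, and bdevperf is restarted in RPC-driven mode. A condensed sketch of that fragment, with the error handling simplified and the count taken from try.txt as before:

# Three failovers (4420->4421->4422->4420) must yield three successful resets.
count=$(grep -c 'Resetting controller successful' try.txt)
(( count == 3 )) || exit 1
# Relaunch bdevperf; -z makes it idle until a perform_tests RPC arrives on the socket.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
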
00:30:40.608 23:33:29 -- common/autotest_common.sh@826 -- # xtrace_disable
00:30:40.608 23:33:29 -- common/autotest_common.sh@10 -- # set +x
00:30:40.608 23:33:29 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:30:40.608 23:33:29 -- common/autotest_common.sh@850 -- # return 0
00:30:40.608 23:33:29 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:40.608 [2024-04-26 23:33:29.694319] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:40.608 23:33:29 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:40.870 [2024-04-26 23:33:29.862744] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:30:40.870 23:33:29 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:41.133 NVMe0n1
00:30:41.133 23:33:30 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:41.394
00:30:41.394 23:33:30 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:41.966
00:30:41.966 23:33:30 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:41.966 23:33:30 -- host/failover.sh@82 -- # grep -q NVMe0
00:30:41.966 23:33:31 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:42.227 23:33:31 -- host/failover.sh@87 -- # sleep 3
00:30:45.532 23:33:34 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:45.533 23:33:34 -- host/failover.sh@88 -- # grep -q NVMe0
00:30:45.533 23:33:34 -- host/failover.sh@90 -- # run_test_pid=4127979
00:30:45.533 23:33:34 -- host/failover.sh@92 -- # wait 4127979
00:30:45.533 23:33:34 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:46.493 0
00:30:46.493 23:33:35 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-04-26 23:33:29.389248] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
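
The sequence above (failover.sh@76-89) is the multipath setup and trigger for the second phase: two extra listeners are added to the subsystem, NVMe0 is attached through all three ports, the active 4420 path is detached to force a failover, and perform_tests kicks off the queued bdevperf run. The same commands, condensed into a plain sketch (the loop is just a compact rendering of the three attach calls):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Expose the subsystem on the two alternate ports used as failover targets.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Attach NVMe0 through each port; the later attaches register alternate paths.
for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# Drop the active 4420 path so bdev_nvme fails over to 4421, then run the workload.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests
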
00:30:46.493 [2024-04-26 23:33:29.389304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127016 ] 00:30:46.493 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.493 [2024-04-26 23:33:29.449539] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.493 [2024-04-26 23:33:29.477032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.493 [2024-04-26 23:33:31.271335] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:46.493 [2024-04-26 23:33:31.271379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.493 [2024-04-26 23:33:31.271389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.493 [2024-04-26 23:33:31.271398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.493 [2024-04-26 23:33:31.271406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.493 [2024-04-26 23:33:31.271414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.493 [2024-04-26 23:33:31.271421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.493 [2024-04-26 23:33:31.271428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.493 [2024-04-26 23:33:31.271435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.493 [2024-04-26 23:33:31.271442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.493 [2024-04-26 23:33:31.271470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.493 [2024-04-26 23:33:31.271484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aa2e0 (9): Bad file descriptor 00:30:46.493 [2024-04-26 23:33:31.405950] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:46.493 Running I/O for 1 seconds... 
00:30:46.493
00:30:46.493 Latency(us)
00:30:46.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:46.493 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:46.494 Verification LBA range: start 0x0 length 0x4000
00:30:46.494 NVMe0n1 : 1.05 10611.04 41.45 0.00 0.00 11552.11 2334.72 44127.57
00:30:46.494 ===================================================================================================================
00:30:46.494 Total : 10611.04 41.45 0.00 0.00 11552.11 2334.72 44127.57
00:30:46.494 23:33:35 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:46.494 23:33:35 -- host/failover.sh@95 -- # grep -q NVMe0
00:30:46.753 23:33:35 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:46.753 23:33:35 -- host/failover.sh@99 -- # grep -q NVMe0
00:30:46.753 23:33:35 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:47.013 23:33:36 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:47.274 23:33:36 -- host/failover.sh@101 -- # sleep 3
00:30:50.663 23:33:39 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:50.663 23:33:39 -- host/failover.sh@103 -- # grep -q NVMe0
00:30:50.663 23:33:39 -- host/failover.sh@108 -- # killprocess 4127016
00:30:50.663 23:33:39 -- common/autotest_common.sh@936 -- # '[' -z 4127016 ']'
00:30:50.663 23:33:39 -- common/autotest_common.sh@940 -- # kill -0 4127016
00:30:50.663 23:33:39 -- common/autotest_common.sh@941 -- # uname
00:30:50.663 23:33:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:50.663 23:33:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4127016
00:30:50.663 23:33:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:30:50.663 23:33:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:30:50.663 23:33:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4127016'
killing process with pid 4127016
00:30:50.663 23:33:39 -- common/autotest_common.sh@955 -- # kill 4127016
00:30:50.663 23:33:39 -- common/autotest_common.sh@960 -- # wait 4127016
00:30:50.663 23:33:39 -- host/failover.sh@110 -- # sync
00:30:50.663 23:33:39 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:50.663 23:33:39 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:30:50.663 23:33:39 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:50.663 23:33:39 -- host/failover.sh@116 -- # nvmftestfini
00:30:50.663 23:33:39 -- nvmf/common.sh@477 -- # nvmfcleanup
00:30:50.663 23:33:39 -- nvmf/common.sh@117 -- # sync
00:30:50.663 23:33:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:50.663 23:33:39 -- nvmf/common.sh@120 -- # set +e
00:30:50.663 23:33:39 -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:50.663 23:33:39 -- nvmf/common.sh@122
-- # modprobe -v -r nvme-tcp 00:30:50.663 rmmod nvme_tcp 00:30:50.663 rmmod nvme_fabrics 00:30:50.663 rmmod nvme_keyring 00:30:50.663 23:33:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:50.663 23:33:39 -- nvmf/common.sh@124 -- # set -e 00:30:50.663 23:33:39 -- nvmf/common.sh@125 -- # return 0 00:30:50.663 23:33:39 -- nvmf/common.sh@478 -- # '[' -n 4123515 ']' 00:30:50.663 23:33:39 -- nvmf/common.sh@479 -- # killprocess 4123515 00:30:50.663 23:33:39 -- common/autotest_common.sh@936 -- # '[' -z 4123515 ']' 00:30:50.663 23:33:39 -- common/autotest_common.sh@940 -- # kill -0 4123515 00:30:50.663 23:33:39 -- common/autotest_common.sh@941 -- # uname 00:30:50.924 23:33:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:50.924 23:33:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4123515 00:30:50.924 23:33:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:50.924 23:33:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:50.924 23:33:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4123515' 00:30:50.924 killing process with pid 4123515 00:30:50.924 23:33:39 -- common/autotest_common.sh@955 -- # kill 4123515 00:30:50.924 23:33:39 -- common/autotest_common.sh@960 -- # wait 4123515 00:30:50.924 23:33:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:50.924 23:33:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:50.924 23:33:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:50.924 23:33:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:50.924 23:33:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:50.924 23:33:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.924 23:33:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:50.924 23:33:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.472 23:33:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:53.472 00:30:53.472 real 0m39.442s 00:30:53.472 user 2m0.525s 00:30:53.472 sys 0m8.302s 00:30:53.472 23:33:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:53.472 23:33:42 -- common/autotest_common.sh@10 -- # set +x 00:30:53.472 ************************************ 00:30:53.472 END TEST nvmf_failover 00:30:53.472 ************************************ 00:30:53.472 23:33:42 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:53.472 23:33:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:53.472 23:33:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:53.472 23:33:42 -- common/autotest_common.sh@10 -- # set +x 00:30:53.472 ************************************ 00:30:53.472 START TEST nvmf_discovery 00:30:53.472 ************************************ 00:30:53.472 23:33:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:53.472 * Looking for test storage... 
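Between END TEST nvmf_failover and the discovery run above, nvmftestfini tears the environment down; the rmmod lines come from its module-unload loop, which tolerates transient failures while TCP connections drain. A sketch of that loop as suggested by the nvmf/common.sh line numbers in the trace (only the set +e, the loop header, and the modprobe calls are visible here; the break and retry delay are assumptions):

  set +e                       # nvme-tcp may still be referenced while qpairs tear down
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
      sleep 1                  # assumed back-off between attempts
  done
  modprobe -v -r nvme-fabrics
  set -e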
00:30:53.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:53.472 23:33:42 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:53.472 23:33:42 -- nvmf/common.sh@7 -- # uname -s 00:30:53.472 23:33:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.472 23:33:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:53.472 23:33:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.472 23:33:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.472 23:33:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.472 23:33:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.472 23:33:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.472 23:33:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.472 23:33:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.472 23:33:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.472 23:33:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:53.472 23:33:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:53.472 23:33:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:53.472 23:33:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.472 23:33:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:53.472 23:33:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:53.472 23:33:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:53.472 23:33:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:53.472 23:33:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.472 23:33:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.473 23:33:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.473 23:33:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.473 23:33:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.473 23:33:42 -- paths/export.sh@5 -- # export PATH 00:30:53.473 23:33:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.473 23:33:42 -- nvmf/common.sh@47 -- # : 0 00:30:53.473 23:33:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:53.473 23:33:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:53.473 23:33:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:53.473 23:33:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.473 23:33:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:53.473 23:33:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:53.473 23:33:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:53.473 23:33:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:53.473 23:33:42 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:53.473 23:33:42 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:53.473 23:33:42 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:53.473 23:33:42 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:53.473 23:33:42 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:53.473 23:33:42 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:53.473 23:33:42 -- host/discovery.sh@25 -- # nvmftestinit 00:30:53.473 23:33:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:53.473 23:33:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:53.473 23:33:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:53.473 23:33:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:53.473 23:33:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:53.473 23:33:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.473 23:33:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:53.473 23:33:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.473 23:33:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:53.473 23:33:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:53.473 23:33:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:53.473 23:33:42 -- common/autotest_common.sh@10 -- # set +x 00:31:00.063 23:33:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:00.063 23:33:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:00.063 23:33:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:00.063 23:33:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:00.063 23:33:49 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:00.063 23:33:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:00.063 23:33:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:00.063 23:33:49 -- nvmf/common.sh@295 -- # net_devs=() 00:31:00.063 23:33:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:00.063 23:33:49 -- nvmf/common.sh@296 -- # e810=() 00:31:00.063 23:33:49 -- nvmf/common.sh@296 -- # local -ga e810 00:31:00.063 23:33:49 -- nvmf/common.sh@297 -- # x722=() 00:31:00.063 23:33:49 -- nvmf/common.sh@297 -- # local -ga x722 00:31:00.063 23:33:49 -- nvmf/common.sh@298 -- # mlx=() 00:31:00.063 23:33:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:00.063 23:33:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.063 23:33:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:00.063 23:33:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:00.063 23:33:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:00.063 23:33:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.063 23:33:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:00.063 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:00.063 23:33:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.063 23:33:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:00.063 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:00.063 23:33:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:00.063 23:33:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.063 
23:33:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.063 23:33:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:00.063 23:33:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.063 23:33:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:00.063 Found net devices under 0000:31:00.0: cvl_0_0 00:31:00.063 23:33:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.063 23:33:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.063 23:33:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.063 23:33:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:00.063 23:33:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.063 23:33:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:00.063 Found net devices under 0000:31:00.1: cvl_0_1 00:31:00.063 23:33:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.063 23:33:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:00.063 23:33:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:00.063 23:33:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:00.063 23:33:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:00.063 23:33:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.063 23:33:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.063 23:33:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.063 23:33:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:00.063 23:33:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.063 23:33:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.063 23:33:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:00.063 23:33:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.063 23:33:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.063 23:33:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:00.063 23:33:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:00.063 23:33:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.063 23:33:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.325 23:33:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.325 23:33:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.325 23:33:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:00.325 23:33:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.585 23:33:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.585 23:33:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.585 23:33:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:00.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:31:00.585 00:31:00.585 --- 10.0.0.2 ping statistics --- 00:31:00.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.586 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:31:00.586 23:33:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:00.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:31:00.586 00:31:00.586 --- 10.0.0.1 ping statistics --- 00:31:00.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.586 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:31:00.586 23:33:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.586 23:33:49 -- nvmf/common.sh@411 -- # return 0 00:31:00.586 23:33:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:00.586 23:33:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.586 23:33:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:00.586 23:33:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:00.586 23:33:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.586 23:33:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:00.586 23:33:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:00.586 23:33:49 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:00.586 23:33:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:00.586 23:33:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:00.586 23:33:49 -- common/autotest_common.sh@10 -- # set +x 00:31:00.586 23:33:49 -- nvmf/common.sh@470 -- # nvmfpid=4133354 00:31:00.586 23:33:49 -- nvmf/common.sh@471 -- # waitforlisten 4133354 00:31:00.586 23:33:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:00.586 23:33:49 -- common/autotest_common.sh@817 -- # '[' -z 4133354 ']' 00:31:00.586 23:33:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.586 23:33:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:00.586 23:33:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.586 23:33:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:00.586 23:33:49 -- common/autotest_common.sh@10 -- # set +x 00:31:00.586 [2024-04-26 23:33:49.729725] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:31:00.586 [2024-04-26 23:33:49.729787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.586 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.586 [2024-04-26 23:33:49.800989] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.586 [2024-04-26 23:33:49.829408] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.586 [2024-04-26 23:33:49.829447] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.586 [2024-04-26 23:33:49.829455] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.586 [2024-04-26 23:33:49.829461] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.586 [2024-04-26 23:33:49.829466] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
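Connectivity in both directions confirms the namespace plumbing that nvmf_tcp_init performed above: one port of the e810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target interface at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The same steps, condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root namespace to target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace back out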
00:31:00.586 [2024-04-26 23:33:49.829491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.526 23:33:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:01.526 23:33:50 -- common/autotest_common.sh@850 -- # return 0 00:31:01.526 23:33:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:01.526 23:33:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:01.526 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.526 23:33:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.526 23:33:50 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.526 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.526 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.526 [2024-04-26 23:33:50.537578] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.526 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.526 23:33:50 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:01.526 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.526 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.526 [2024-04-26 23:33:50.549753] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:01.526 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.526 23:33:50 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:01.526 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.526 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.526 null0 00:31:01.526 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.526 23:33:50 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:01.526 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.526 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.526 null1 00:31:01.526 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.526 23:33:50 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:01.526 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.526 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.526 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.526 23:33:50 -- host/discovery.sh@45 -- # hostpid=4133400 00:31:01.526 23:33:50 -- host/discovery.sh@46 -- # waitforlisten 4133400 /tmp/host.sock 00:31:01.526 23:33:50 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:01.526 23:33:50 -- common/autotest_common.sh@817 -- # '[' -z 4133400 ']' 00:31:01.526 23:33:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:31:01.526 23:33:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:01.526 23:33:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:01.526 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:01.526 23:33:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:01.526 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.526 [2024-04-26 23:33:50.636191] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
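At this point discovery.sh has two SPDK processes running: the target (nvmfpid 4133354, core mask 0x2, inside the namespace) with a TCP transport, a listener on the well-known discovery NQN at port 8009, and two null bdevs to publish later, plus a host-side app (hostpid 4133400, core mask 0x1) on /tmp/host.sock that will do the discovering. Roughly, expanding the rpc_cmd wrappers traced above (rpc.py shortened from the full scripts/rpc.py path):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc.py bdev_null_create null0 1000 512    # 1000 MiB backing, 512-byte blocks
  rpc.py bdev_null_create null1 1000 512
  # Second instance acts as the discovery host, driven over its own RPC socket
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &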
00:31:01.526 [2024-04-26 23:33:50.636237] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133400 ] 00:31:01.526 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.526 [2024-04-26 23:33:50.695023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.526 [2024-04-26 23:33:50.723936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.787 23:33:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:01.787 23:33:50 -- common/autotest_common.sh@850 -- # return 0 00:31:01.787 23:33:50 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:01.787 23:33:50 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:01.787 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.787 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.787 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.787 23:33:50 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:01.787 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.787 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.787 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.787 23:33:50 -- host/discovery.sh@72 -- # notify_id=0 00:31:01.787 23:33:50 -- host/discovery.sh@83 -- # get_subsystem_names 00:31:01.787 23:33:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:01.787 23:33:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:01.787 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.787 23:33:50 -- host/discovery.sh@59 -- # sort 00:31:01.787 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.787 23:33:50 -- host/discovery.sh@59 -- # xargs 00:31:01.787 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.787 23:33:50 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:01.787 23:33:50 -- host/discovery.sh@84 -- # get_bdev_list 00:31:01.787 23:33:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.787 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.787 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.787 23:33:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:01.787 23:33:50 -- host/discovery.sh@55 -- # sort 00:31:01.787 23:33:50 -- host/discovery.sh@55 -- # xargs 00:31:01.787 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.787 23:33:50 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:01.787 23:33:50 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:01.787 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.787 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.787 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.787 23:33:50 -- host/discovery.sh@87 -- # get_subsystem_names 00:31:01.787 23:33:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:01.787 23:33:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:01.787 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.787 23:33:50 -- host/discovery.sh@59 -- # sort 
00:31:01.787 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.787 23:33:50 -- host/discovery.sh@59 -- # xargs 00:31:01.787 23:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.787 23:33:50 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:01.787 23:33:50 -- host/discovery.sh@88 -- # get_bdev_list 00:31:01.787 23:33:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.787 23:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.787 23:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.787 23:33:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:01.787 23:33:50 -- host/discovery.sh@55 -- # sort 00:31:01.787 23:33:50 -- host/discovery.sh@55 -- # xargs 00:31:01.787 23:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.048 23:33:51 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:02.048 23:33:51 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:02.048 23:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.048 23:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.048 23:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.048 23:33:51 -- host/discovery.sh@91 -- # get_subsystem_names 00:31:02.048 23:33:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.048 23:33:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.048 23:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.048 23:33:51 -- host/discovery.sh@59 -- # sort 00:31:02.048 23:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.048 23:33:51 -- host/discovery.sh@59 -- # xargs 00:31:02.048 23:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.048 23:33:51 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:02.048 23:33:51 -- host/discovery.sh@92 -- # get_bdev_list 00:31:02.048 23:33:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.048 23:33:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:02.048 23:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.048 23:33:51 -- host/discovery.sh@55 -- # sort 00:31:02.048 23:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.048 23:33:51 -- host/discovery.sh@55 -- # xargs 00:31:02.048 23:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.048 23:33:51 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:02.048 23:33:51 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.048 23:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.048 23:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.048 [2024-04-26 23:33:51.171336] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.048 23:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.048 23:33:51 -- host/discovery.sh@97 -- # get_subsystem_names 00:31:02.048 23:33:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.048 23:33:51 -- host/discovery.sh@59 -- # xargs 00:31:02.048 23:33:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.048 23:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.048 23:33:51 -- host/discovery.sh@59 -- # sort 00:31:02.048 23:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.048 23:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.048 23:33:51 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:02.048 23:33:51 -- host/discovery.sh@98 -- # get_bdev_list 00:31:02.048 23:33:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.048 23:33:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:02.048 23:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.048 23:33:51 -- host/discovery.sh@55 -- # sort 00:31:02.048 23:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.048 23:33:51 -- host/discovery.sh@55 -- # xargs 00:31:02.048 23:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.048 23:33:51 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:02.048 23:33:51 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:02.048 23:33:51 -- host/discovery.sh@79 -- # expected_count=0 00:31:02.048 23:33:51 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:02.048 23:33:51 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:02.048 23:33:51 -- common/autotest_common.sh@901 -- # local max=10 00:31:02.048 23:33:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:02.048 23:33:51 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:02.048 23:33:51 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:02.048 23:33:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:02.048 23:33:51 -- host/discovery.sh@74 -- # jq '. | length' 00:31:02.048 23:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.048 23:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.048 23:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.309 23:33:51 -- host/discovery.sh@74 -- # notification_count=0 00:31:02.309 23:33:51 -- host/discovery.sh@75 -- # notify_id=0 00:31:02.309 23:33:51 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:02.309 23:33:51 -- common/autotest_common.sh@904 -- # return 0 00:31:02.309 23:33:51 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:02.309 23:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.309 23:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.309 23:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.309 23:33:51 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:02.309 23:33:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:02.309 23:33:51 -- common/autotest_common.sh@901 -- # local max=10 00:31:02.309 23:33:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:02.309 23:33:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:02.309 23:33:51 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:31:02.309 23:33:51 -- host/discovery.sh@59 -- # sort 00:31:02.309 23:33:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.309 23:33:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.309 23:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.309 23:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.309 23:33:51 -- host/discovery.sh@59 -- # xargs 00:31:02.309 23:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:31:02.309 23:33:51 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:31:02.309 23:33:51 -- common/autotest_common.sh@906 -- # sleep 1 00:31:02.880 [2024-04-26 23:33:51.833726] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:02.880 [2024-04-26 23:33:51.833747] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:02.880 [2024-04-26 23:33:51.833762] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:02.880 [2024-04-26 23:33:51.922125] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:02.880 [2024-04-26 23:33:51.984306] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:02.880 [2024-04-26 23:33:51.984325] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:03.141 23:33:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:03.141 23:33:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:31:03.402 23:33:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:03.402 23:33:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:03.402 23:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.402 23:33:52 -- host/discovery.sh@59 -- # sort 00:31:03.402 23:33:52 -- common/autotest_common.sh@10 -- # set +x 00:31:03.402 23:33:52 -- host/discovery.sh@59 -- # xargs 00:31:03.402 23:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.402 23:33:52 -- common/autotest_common.sh@904 -- # return 0 00:31:03.402 23:33:52 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:03.402 23:33:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:03.402 23:33:52 -- common/autotest_common.sh@901 -- # local max=10 00:31:03.402 23:33:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # get_bdev_list 00:31:03.402 23:33:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:03.402 23:33:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:03.402 23:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.402 23:33:52 -- host/discovery.sh@55 -- # sort 00:31:03.402 23:33:52 -- common/autotest_common.sh@10 -- # set +x 00:31:03.402 23:33:52 -- host/discovery.sh@55 -- # xargs 00:31:03.402 23:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:03.402 23:33:52 -- common/autotest_common.sh@904 -- # return 0 00:31:03.402 23:33:52 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:03.402 23:33:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:03.402 23:33:52 -- common/autotest_common.sh@901 -- # local max=10 00:31:03.402 23:33:52 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:31:03.402 23:33:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:03.402 23:33:52 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:03.402 23:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.402 23:33:52 -- host/discovery.sh@63 -- # sort -n 00:31:03.402 23:33:52 -- common/autotest_common.sh@10 -- # set +x 00:31:03.402 23:33:52 -- host/discovery.sh@63 -- # xargs 00:31:03.402 23:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:31:03.402 23:33:52 -- common/autotest_common.sh@904 -- # return 0 00:31:03.402 23:33:52 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:03.402 23:33:52 -- host/discovery.sh@79 -- # expected_count=1 00:31:03.402 23:33:52 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:03.402 23:33:52 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:03.402 23:33:52 -- common/autotest_common.sh@901 -- # local max=10 00:31:03.402 23:33:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:03.402 23:33:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:03.402 23:33:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:03.402 23:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.402 23:33:52 -- common/autotest_common.sh@10 -- # set +x 00:31:03.402 23:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.402 23:33:52 -- host/discovery.sh@74 -- # notification_count=1 00:31:03.402 23:33:52 -- host/discovery.sh@75 -- # notify_id=1 00:31:03.402 23:33:52 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:03.402 23:33:52 -- common/autotest_common.sh@904 -- # return 0 00:31:03.403 23:33:52 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:03.403 23:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.403 23:33:52 -- common/autotest_common.sh@10 -- # set +x 00:31:03.403 23:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.403 23:33:52 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:03.403 23:33:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:03.403 23:33:52 -- common/autotest_common.sh@901 -- # local max=10 00:31:03.403 23:33:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:03.403 23:33:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:03.403 23:33:52 -- common/autotest_common.sh@903 -- # get_bdev_list 00:31:03.403 23:33:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:03.403 23:33:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:03.403 23:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.403 23:33:52 -- host/discovery.sh@55 -- # sort 00:31:03.403 23:33:52 -- common/autotest_common.sh@10 -- # set +x 00:31:03.403 23:33:52 -- host/discovery.sh@55 -- # xargs 00:31:03.664 23:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.664 23:33:52 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:03.664 23:33:52 -- common/autotest_common.sh@904 -- # return 0 00:31:03.664 23:33:52 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:03.664 23:33:52 -- host/discovery.sh@79 -- # expected_count=1 00:31:03.664 23:33:52 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:03.664 23:33:52 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:03.664 23:33:52 -- common/autotest_common.sh@901 -- # local max=10 00:31:03.664 23:33:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:03.664 23:33:52 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:03.664 23:33:52 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:03.664 23:33:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:03.664 23:33:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:03.664 23:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.664 23:33:52 -- common/autotest_common.sh@10 -- # set +x 00:31:03.664 23:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.664 23:33:52 -- host/discovery.sh@74 -- # notification_count=0 00:31:03.664 23:33:52 -- host/discovery.sh@75 -- # notify_id=1 00:31:03.664 23:33:52 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:03.664 23:33:52 -- common/autotest_common.sh@906 -- # sleep 1 00:31:05.047 23:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.047 23:33:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:05.047 23:33:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:05.047 23:33:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:05.047 23:33:53 -- host/discovery.sh@74 -- # jq '. | length' 00:31:05.047 23:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.047 23:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:05.047 23:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.047 23:33:53 -- host/discovery.sh@74 -- # notification_count=1 00:31:05.047 23:33:53 -- host/discovery.sh@75 -- # notify_id=2 00:31:05.047 23:33:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:05.047 23:33:53 -- common/autotest_common.sh@904 -- # return 0 00:31:05.047 23:33:53 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:05.047 23:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.047 23:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:05.047 [2024-04-26 23:33:53.922859] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:05.047 [2024-04-26 23:33:53.923199] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:05.047 [2024-04-26 23:33:53.923224] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:05.047 23:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.047 23:33:53 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.047 23:33:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.047 23:33:53 -- common/autotest_common.sh@901 -- # local max=10 00:31:05.047 23:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.047 23:33:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:05.047 23:33:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:31:05.047 23:33:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.047 23:33:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.047 23:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.047 23:33:53 -- host/discovery.sh@59 -- # sort 00:31:05.047 23:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:05.047 23:33:53 -- host/discovery.sh@59 -- # xargs 00:31:05.047 23:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.047 23:33:53 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.047 23:33:53 -- common/autotest_common.sh@904 -- # return 0 00:31:05.047 23:33:53 -- host/discovery.sh@121 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:05.047 23:33:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:05.047 23:33:53 -- common/autotest_common.sh@901 -- # local max=10 00:31:05.047 23:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.047 23:33:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:05.047 23:33:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:31:05.047 23:33:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.047 23:33:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.047 23:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.047 23:33:53 -- host/discovery.sh@55 -- # sort 00:31:05.047 23:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:05.047 23:33:53 -- host/discovery.sh@55 -- # xargs 00:31:05.047 23:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.048 23:33:54 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:05.048 23:33:54 -- common/autotest_common.sh@904 -- # return 0 00:31:05.048 23:33:54 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:05.048 23:33:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:05.048 23:33:54 -- common/autotest_common.sh@901 -- # local max=10 00:31:05.048 23:33:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.048 23:33:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:05.048 23:33:54 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:31:05.048 23:33:54 -- host/discovery.sh@63 -- # xargs 00:31:05.048 23:33:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:05.048 23:33:54 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:05.048 23:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.048 23:33:54 -- host/discovery.sh@63 -- # sort -n 00:31:05.048 23:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.048 [2024-04-26 23:33:54.053991] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:05.048 23:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.048 23:33:54 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:05.048 23:33:54 -- common/autotest_common.sh@906 -- # sleep 1 00:31:05.308 [2024-04-26 23:33:54.358423] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:05.308 [2024-04-26 23:33:54.358441] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:05.308 [2024-04-26 23:33:54.358447] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:05.880 23:33:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.880 23:33:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:05.880 23:33:55 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:31:05.880 
23:33:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:05.880 23:33:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:05.880 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.880 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:05.880 23:33:55 -- host/discovery.sh@63 -- # sort -n 00:31:05.880 23:33:55 -- host/discovery.sh@63 -- # xargs 00:31:05.880 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.141 23:33:55 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:06.141 23:33:55 -- common/autotest_common.sh@904 -- # return 0 00:31:06.141 23:33:55 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:06.141 23:33:55 -- host/discovery.sh@79 -- # expected_count=0 00:31:06.141 23:33:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.141 23:33:55 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.141 23:33:55 -- common/autotest_common.sh@901 -- # local max=10 00:31:06.141 23:33:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:06.141 23:33:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.141 23:33:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:06.141 23:33:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:06.141 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.141 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.141 23:33:55 -- host/discovery.sh@74 -- # jq '. | length' 00:31:06.141 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.141 23:33:55 -- host/discovery.sh@74 -- # notification_count=0 00:31:06.141 23:33:55 -- host/discovery.sh@75 -- # notify_id=2 00:31:06.141 23:33:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:06.141 23:33:55 -- common/autotest_common.sh@904 -- # return 0 00:31:06.141 23:33:55 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.141 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.141 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.141 [2024-04-26 23:33:55.202842] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:06.141 [2024-04-26 23:33:55.202862] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:06.141 [2024-04-26 23:33:55.206040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.141 [2024-04-26 23:33:55.206058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.141 [2024-04-26 23:33:55.206067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.141 [2024-04-26 23:33:55.206074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.141 [2024-04-26 23:33:55.206082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.141 [2024-04-26 23:33:55.206089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.141 [2024-04-26 23:33:55.206097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.141 [2024-04-26 23:33:55.206104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.141 [2024-04-26 23:33:55.206111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ec8b0 is same with the state(5) to be set 00:31:06.141 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.141 23:33:55 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:06.141 23:33:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:06.141 23:33:55 -- common/autotest_common.sh@901 -- # local max=10 00:31:06.141 23:33:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:06.141 23:33:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:06.141 23:33:55 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:31:06.141 23:33:55 -- host/discovery.sh@59 -- # sort 00:31:06.141 23:33:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.141 23:33:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.141 23:33:55 -- host/discovery.sh@59 -- # xargs 00:31:06.141 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.141 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.142 [2024-04-26 23:33:55.216052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec8b0 (9): Bad file descriptor 00:31:06.142 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.142 [2024-04-26 23:33:55.226090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:06.142 [2024-04-26 23:33:55.226451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.226774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.226783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ec8b0 with addr=10.0.0.2, port=4420 00:31:06.142 [2024-04-26 23:33:55.226791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ec8b0 is same with the state(5) to be set 00:31:06.142 [2024-04-26 23:33:55.226802] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec8b0 (9): Bad file descriptor 00:31:06.142 [2024-04-26 23:33:55.226813] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:06.142 [2024-04-26 23:33:55.226819] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:06.142 [2024-04-26 23:33:55.226827] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:06.142 [2024-04-26 23:33:55.226842] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
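The xtrace above (autotest_common.sh@900-@906) exposes the internals of the waitforcondition helper that drives all of these checks: store the condition string, retry up to max=10 times, eval the condition, sleep 1 s between attempts. A minimal reconstruction — the loop body is inferred from the trace, not taken from the source file:

    # Sketch of waitforcondition as visible in the xtrace; names follow
    # the trace (cond, max), the exact control flow is an assumption.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1        # matches the "sleep 1" at @906 above
        done
        return 1           # condition never held within ~10 s
    }

    # Usage, as in the trace:
    # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'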
00:31:06.142 [2024-04-26 23:33:55.236142] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:06.142 [2024-04-26 23:33:55.236463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.236807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.236816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ec8b0 with addr=10.0.0.2, port=4420 00:31:06.142 [2024-04-26 23:33:55.236824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ec8b0 is same with the state(5) to be set 00:31:06.142 [2024-04-26 23:33:55.236834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec8b0 (9): Bad file descriptor 00:31:06.142 [2024-04-26 23:33:55.236849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:06.142 [2024-04-26 23:33:55.236855] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:06.142 [2024-04-26 23:33:55.236862] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:06.142 [2024-04-26 23:33:55.236872] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:06.142 [2024-04-26 23:33:55.246192] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:06.142 [2024-04-26 23:33:55.246529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.246857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.246874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ec8b0 with addr=10.0.0.2, port=4420 00:31:06.142 [2024-04-26 23:33:55.246881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ec8b0 is same with the state(5) to be set 00:31:06.142 [2024-04-26 23:33:55.246892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec8b0 (9): Bad file descriptor 00:31:06.142 [2024-04-26 23:33:55.246902] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:06.142 [2024-04-26 23:33:55.246908] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:06.142 [2024-04-26 23:33:55.246915] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:06.142 [2024-04-26 23:33:55.246925] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
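The repeated "connect() failed, errno = 111" (ECONNREFUSED) blocks are the host-side driver retrying port 4420 after the test removed that listener with the nvmf_subsystem_remove_listener RPC shown at host/discovery.sh@127 above. The same call issued directly with SPDK's scripts/rpc.py would look like this (the socket path is the target's default and is an assumption here — the trace uses the rpc_cmd wrapper without -s):

    # Drop the 4420 listener on the target; subsequent host reconnects to
    # that port fail with ECONNREFUSED, producing the "Resetting
    # controller failed" blocks in this log.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420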
00:31:06.142 [2024-04-26 23:33:55.256243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:06.142 [2024-04-26 23:33:55.256588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.257032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.257068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ec8b0 with addr=10.0.0.2, port=4420 00:31:06.142 [2024-04-26 23:33:55.257079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ec8b0 is same with the state(5) to be set 00:31:06.142 [2024-04-26 23:33:55.257097] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec8b0 (9): Bad file descriptor 00:31:06.142 [2024-04-26 23:33:55.257123] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:06.142 [2024-04-26 23:33:55.257131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:06.142 [2024-04-26 23:33:55.257139] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:06.142 [2024-04-26 23:33:55.257154] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:06.142 23:33:55 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.142 23:33:55 -- common/autotest_common.sh@904 -- # return 0 00:31:06.142 23:33:55 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:06.142 23:33:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:06.142 23:33:55 -- common/autotest_common.sh@901 -- # local max=10 00:31:06.142 23:33:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:06.142 23:33:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:06.142 23:33:55 -- common/autotest_common.sh@903 -- # get_bdev_list 00:31:06.142 [2024-04-26 23:33:55.266298] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:06.142 [2024-04-26 23:33:55.266458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 23:33:55 -- host/discovery.sh@55 -- # sort 00:31:06.142 [2024-04-26 23:33:55.267522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.267543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ec8b0 with addr=10.0.0.2, port=4420 00:31:06.142 [2024-04-26 23:33:55.267552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ec8b0 is same with the state(5) to be set 00:31:06.142 [2024-04-26 23:33:55.267567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec8b0 (9): Bad file descriptor 00:31:06.142 [2024-04-26 23:33:55.267595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:06.142 [2024-04-26 23:33:55.267603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:06.142 [2024-04-26 23:33:55.267611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:06.142 [2024-04-26 23:33:55.267622] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:06.142 23:33:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.142 23:33:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.142 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.142 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.142 23:33:55 -- host/discovery.sh@55 -- # xargs 00:31:06.142 [2024-04-26 23:33:55.276354] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:06.142 [2024-04-26 23:33:55.276678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.276882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.276892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ec8b0 with addr=10.0.0.2, port=4420 00:31:06.142 [2024-04-26 23:33:55.276904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ec8b0 is same with the state(5) to be set 00:31:06.142 [2024-04-26 23:33:55.276916] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec8b0 (9): Bad file descriptor 00:31:06.142 [2024-04-26 23:33:55.276933] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:06.142 [2024-04-26 23:33:55.276940] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:06.142 [2024-04-26 23:33:55.276947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:06.142 [2024-04-26 23:33:55.276957] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:06.142 [2024-04-26 23:33:55.286409] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:06.142 [2024-04-26 23:33:55.286731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.286923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.142 [2024-04-26 23:33:55.286932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ec8b0 with addr=10.0.0.2, port=4420 00:31:06.142 [2024-04-26 23:33:55.286939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ec8b0 is same with the state(5) to be set 00:31:06.142 [2024-04-26 23:33:55.286950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec8b0 (9): Bad file descriptor 00:31:06.142 [2024-04-26 23:33:55.286967] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:06.142 [2024-04-26 23:33:55.286974] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:06.142 [2024-04-26 23:33:55.286980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:06.142 [2024-04-26 23:33:55.286991] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
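The notification checks at host/discovery.sh@74-@80 count events past the last seen notify id via notify_get_notifications and jq. A sketch of that helper, reconstructed from the trace (the notify_id update rule is inferred from the notification_count=0/notify_id=2 and, later, notification_count=2/notify_id=4 values; rpc.py stands in for the rpc_cmd wrapper):

    # Count new notifications on the host app and advance the cursor.
    # /tmp/host.sock is the host-side RPC socket used throughout this log.
    get_notification_count() {
        notification_count=$(scripts/rpc.py -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$(( notify_id + notification_count ))
    }

    # e.g. after the listener removal, no new events are expected:
    # notify_id=2; get_notification_count; (( notification_count == 0 ))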
00:31:06.142 [2024-04-26 23:33:55.292992] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:06.142 [2024-04-26 23:33:55.293010] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:06.142 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.142 23:33:55 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:06.142 23:33:55 -- common/autotest_common.sh@904 -- # return 0 00:31:06.142 23:33:55 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:06.142 23:33:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:06.142 23:33:55 -- common/autotest_common.sh@901 -- # local max=10 00:31:06.142 23:33:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:06.142 23:33:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:06.142 23:33:55 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:31:06.142 23:33:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:06.142 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.142 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.142 23:33:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:06.142 23:33:55 -- host/discovery.sh@63 -- # sort -n 00:31:06.142 23:33:55 -- host/discovery.sh@63 -- # xargs 00:31:06.142 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.142 23:33:55 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:31:06.143 23:33:55 -- common/autotest_common.sh@904 -- # return 0 00:31:06.143 23:33:55 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:06.143 23:33:55 -- host/discovery.sh@79 -- # expected_count=0 00:31:06.143 23:33:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.143 23:33:55 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.143 23:33:55 -- common/autotest_common.sh@901 -- # local max=10 00:31:06.143 23:33:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:06.143 23:33:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.143 23:33:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:06.143 23:33:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:06.143 23:33:55 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:06.143 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.143 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.143 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.403 23:33:55 -- host/discovery.sh@74 -- # notification_count=0 00:31:06.403 23:33:55 -- host/discovery.sh@75 -- # notify_id=2 00:31:06.403 23:33:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:06.403 23:33:55 -- common/autotest_common.sh@904 -- # return 0 00:31:06.403 23:33:55 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:06.403 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.403 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.403 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.403 23:33:55 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:06.403 23:33:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:06.403 23:33:55 -- common/autotest_common.sh@901 -- # local max=10 00:31:06.403 23:33:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:06.403 23:33:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:06.403 23:33:55 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:31:06.403 23:33:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.403 23:33:55 -- host/discovery.sh@59 -- # xargs 00:31:06.403 23:33:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.403 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.403 23:33:55 -- host/discovery.sh@59 -- # sort 00:31:06.403 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.403 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.403 23:33:55 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:31:06.403 23:33:55 -- common/autotest_common.sh@904 -- # return 0 00:31:06.403 23:33:55 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:06.403 23:33:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:06.403 23:33:55 -- common/autotest_common.sh@901 -- # local max=10 00:31:06.403 23:33:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:06.403 23:33:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:06.403 23:33:55 -- common/autotest_common.sh@903 -- # get_bdev_list 00:31:06.403 23:33:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.403 23:33:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.403 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.403 23:33:55 -- host/discovery.sh@55 -- # sort 00:31:06.403 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.403 23:33:55 -- host/discovery.sh@55 -- # xargs 00:31:06.403 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.403 23:33:55 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:31:06.403 23:33:55 -- common/autotest_common.sh@904 -- # return 0 00:31:06.403 23:33:55 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:06.403 23:33:55 -- host/discovery.sh@79 -- # expected_count=2 00:31:06.403 23:33:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.403 23:33:55 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.403 23:33:55 -- common/autotest_common.sh@901 -- # local max=10 00:31:06.403 23:33:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:06.403 23:33:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.403 23:33:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:06.403 23:33:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:06.403 23:33:55 -- host/discovery.sh@74 -- # jq '. | length' 00:31:06.403 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.403 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.403 23:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.403 23:33:55 -- host/discovery.sh@74 -- # notification_count=2 00:31:06.403 23:33:55 -- host/discovery.sh@75 -- # notify_id=4 00:31:06.404 23:33:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:06.404 23:33:55 -- common/autotest_common.sh@904 -- # return 0 00:31:06.404 23:33:55 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.404 23:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.404 23:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:07.789 [2024-04-26 23:33:56.617078] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:07.789 [2024-04-26 23:33:56.617095] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:07.789 [2024-04-26 23:33:56.617107] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:07.789 [2024-04-26 23:33:56.705405] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:07.789 [2024-04-26 23:33:56.975937] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:07.789 [2024-04-26 23:33:56.975967] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:07.789 23:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.789 23:33:56 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:07.789 23:33:56 -- common/autotest_common.sh@638 -- # local es=0 00:31:07.789 23:33:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:07.789 23:33:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:07.789 23:33:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:07.789 23:33:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:07.789 23:33:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:07.789 23:33:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:07.789 23:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.789 23:33:56 -- 
common/autotest_common.sh@10 -- # set +x 00:31:07.789 request: 00:31:07.789 { 00:31:07.789 "name": "nvme", 00:31:07.789 "trtype": "tcp", 00:31:07.789 "traddr": "10.0.0.2", 00:31:07.789 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:07.789 "adrfam": "ipv4", 00:31:07.789 "trsvcid": "8009", 00:31:07.789 "wait_for_attach": true, 00:31:07.789 "method": "bdev_nvme_start_discovery", 00:31:07.789 "req_id": 1 00:31:07.789 } 00:31:07.790 Got JSON-RPC error response 00:31:07.790 response: 00:31:07.790 { 00:31:07.790 "code": -17, 00:31:07.790 "message": "File exists" 00:31:07.790 } 00:31:07.790 23:33:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:07.790 23:33:56 -- common/autotest_common.sh@641 -- # es=1 00:31:07.790 23:33:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:07.790 23:33:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:07.790 23:33:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:07.790 23:33:56 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:07.790 23:33:57 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:07.790 23:33:57 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:07.790 23:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.790 23:33:57 -- host/discovery.sh@67 -- # sort 00:31:07.790 23:33:57 -- common/autotest_common.sh@10 -- # set +x 00:31:07.790 23:33:57 -- host/discovery.sh@67 -- # xargs 00:31:07.790 23:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.050 23:33:57 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:08.050 23:33:57 -- host/discovery.sh@146 -- # get_bdev_list 00:31:08.050 23:33:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.050 23:33:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.050 23:33:57 -- host/discovery.sh@55 -- # sort 00:31:08.050 23:33:57 -- host/discovery.sh@55 -- # xargs 00:31:08.050 23:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.050 23:33:57 -- common/autotest_common.sh@10 -- # set +x 00:31:08.050 23:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.050 23:33:57 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:08.050 23:33:57 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:08.050 23:33:57 -- common/autotest_common.sh@638 -- # local es=0 00:31:08.050 23:33:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:08.050 23:33:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:08.050 23:33:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:08.050 23:33:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:08.050 23:33:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:08.050 23:33:57 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:08.050 23:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.050 23:33:57 -- common/autotest_common.sh@10 -- # set +x 00:31:08.050 request: 00:31:08.050 { 00:31:08.050 "name": "nvme_second", 00:31:08.050 "trtype": "tcp", 00:31:08.050 "traddr": "10.0.0.2", 00:31:08.050 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:31:08.050 "adrfam": "ipv4", 00:31:08.050 "trsvcid": "8009", 00:31:08.050 "wait_for_attach": true, 00:31:08.050 "method": "bdev_nvme_start_discovery", 00:31:08.050 "req_id": 1 00:31:08.050 } 00:31:08.050 Got JSON-RPC error response 00:31:08.050 response: 00:31:08.050 { 00:31:08.050 "code": -17, 00:31:08.050 "message": "File exists" 00:31:08.050 } 00:31:08.050 23:33:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:08.050 23:33:57 -- common/autotest_common.sh@641 -- # es=1 00:31:08.050 23:33:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:08.050 23:33:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:08.050 23:33:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:08.050 23:33:57 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:08.050 23:33:57 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:08.050 23:33:57 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:08.050 23:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.050 23:33:57 -- host/discovery.sh@67 -- # sort 00:31:08.050 23:33:57 -- common/autotest_common.sh@10 -- # set +x 00:31:08.050 23:33:57 -- host/discovery.sh@67 -- # xargs 00:31:08.050 23:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.050 23:33:57 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:08.050 23:33:57 -- host/discovery.sh@152 -- # get_bdev_list 00:31:08.050 23:33:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.051 23:33:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.051 23:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.051 23:33:57 -- host/discovery.sh@55 -- # sort 00:31:08.051 23:33:57 -- common/autotest_common.sh@10 -- # set +x 00:31:08.051 23:33:57 -- host/discovery.sh@55 -- # xargs 00:31:08.051 23:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.051 23:33:57 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:08.051 23:33:57 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:08.051 23:33:57 -- common/autotest_common.sh@638 -- # local es=0 00:31:08.051 23:33:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:08.051 23:33:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:08.051 23:33:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:08.051 23:33:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:08.051 23:33:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:08.051 23:33:57 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:08.051 23:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.051 23:33:57 -- common/autotest_common.sh@10 -- # set +x 00:31:08.995 [2024-04-26 23:33:58.244389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.995 [2024-04-26 23:33:58.244646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.995 [2024-04-26 23:33:58.244657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x1408e00 with addr=10.0.0.2, port=8010 00:31:08.995 [2024-04-26 23:33:58.244669] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:08.995 [2024-04-26 23:33:58.244676] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:08.995 [2024-04-26 23:33:58.244683] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:10.383 [2024-04-26 23:33:59.246810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.383 [2024-04-26 23:33:59.247195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.383 [2024-04-26 23:33:59.247206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1408e00 with addr=10.0.0.2, port=8010 00:31:10.383 [2024-04-26 23:33:59.247217] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:10.383 [2024-04-26 23:33:59.247224] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:10.383 [2024-04-26 23:33:59.247231] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:11.327 [2024-04-26 23:34:00.248816] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:11.327 request: 00:31:11.327 { 00:31:11.327 "name": "nvme_second", 00:31:11.327 "trtype": "tcp", 00:31:11.327 "traddr": "10.0.0.2", 00:31:11.327 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:11.327 "adrfam": "ipv4", 00:31:11.327 "trsvcid": "8010", 00:31:11.327 "attach_timeout_ms": 3000, 00:31:11.327 "method": "bdev_nvme_start_discovery", 00:31:11.327 "req_id": 1 00:31:11.327 } 00:31:11.327 Got JSON-RPC error response 00:31:11.327 response: 00:31:11.327 { 00:31:11.327 "code": -110, 00:31:11.327 "message": "Connection timed out" 00:31:11.327 } 00:31:11.327 23:34:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:11.327 23:34:00 -- common/autotest_common.sh@641 -- # es=1 00:31:11.327 23:34:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:11.327 23:34:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:11.327 23:34:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:11.327 23:34:00 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:11.327 23:34:00 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:11.327 23:34:00 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:11.327 23:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.327 23:34:00 -- common/autotest_common.sh@10 -- # set +x 00:31:11.327 23:34:00 -- host/discovery.sh@67 -- # sort 00:31:11.327 23:34:00 -- host/discovery.sh@67 -- # xargs 00:31:11.327 23:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.327 23:34:00 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:11.327 23:34:00 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:11.327 23:34:00 -- host/discovery.sh@161 -- # kill 4133400 00:31:11.327 23:34:00 -- host/discovery.sh@162 -- # nvmftestfini 00:31:11.327 23:34:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:11.327 23:34:00 -- nvmf/common.sh@117 -- # sync 00:31:11.327 23:34:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:11.327 23:34:00 -- nvmf/common.sh@120 -- # set +e 00:31:11.327 23:34:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:11.327 23:34:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:11.327 rmmod nvme_tcp 00:31:11.327 rmmod nvme_fabrics 
00:31:11.327 rmmod nvme_keyring 00:31:11.327 23:34:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:11.327 23:34:00 -- nvmf/common.sh@124 -- # set -e 00:31:11.327 23:34:00 -- nvmf/common.sh@125 -- # return 0 00:31:11.327 23:34:00 -- nvmf/common.sh@478 -- # '[' -n 4133354 ']' 00:31:11.327 23:34:00 -- nvmf/common.sh@479 -- # killprocess 4133354 00:31:11.327 23:34:00 -- common/autotest_common.sh@936 -- # '[' -z 4133354 ']' 00:31:11.327 23:34:00 -- common/autotest_common.sh@940 -- # kill -0 4133354 00:31:11.327 23:34:00 -- common/autotest_common.sh@941 -- # uname 00:31:11.327 23:34:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:11.327 23:34:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4133354 00:31:11.327 23:34:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:11.327 23:34:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:11.327 23:34:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4133354' 00:31:11.327 killing process with pid 4133354 00:31:11.327 23:34:00 -- common/autotest_common.sh@955 -- # kill 4133354 00:31:11.327 23:34:00 -- common/autotest_common.sh@960 -- # wait 4133354 00:31:11.327 23:34:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:11.327 23:34:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:11.327 23:34:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:11.327 23:34:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:11.327 23:34:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:11.327 23:34:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.327 23:34:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.327 23:34:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.888 23:34:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:13.888 00:31:13.888 real 0m20.234s 00:31:13.888 user 0m24.271s 00:31:13.888 sys 0m6.631s 00:31:13.888 23:34:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:13.888 23:34:02 -- common/autotest_common.sh@10 -- # set +x 00:31:13.888 ************************************ 00:31:13.888 END TEST nvmf_discovery 00:31:13.888 ************************************ 00:31:13.888 23:34:02 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:13.888 23:34:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:13.888 23:34:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:13.888 23:34:02 -- common/autotest_common.sh@10 -- # set +x 00:31:13.888 ************************************ 00:31:13.888 START TEST nvmf_discovery_remove_ifc 00:31:13.888 ************************************ 00:31:13.888 23:34:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:13.888 * Looking for test storage... 
00:31:13.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:13.888 23:34:02 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.888 23:34:02 -- nvmf/common.sh@7 -- # uname -s 00:31:13.888 23:34:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.888 23:34:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.888 23:34:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.888 23:34:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.888 23:34:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.888 23:34:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.888 23:34:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.888 23:34:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.888 23:34:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.888 23:34:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.888 23:34:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:13.888 23:34:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:13.888 23:34:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.888 23:34:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.888 23:34:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.888 23:34:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.888 23:34:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.888 23:34:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.888 23:34:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.888 23:34:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.888 23:34:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.888 23:34:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.888 23:34:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.888 23:34:02 -- paths/export.sh@5 -- # export PATH 00:31:13.888 23:34:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.888 23:34:02 -- nvmf/common.sh@47 -- # : 0 00:31:13.888 23:34:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:13.888 23:34:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:13.888 23:34:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.888 23:34:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.888 23:34:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.888 23:34:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:13.888 23:34:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:13.888 23:34:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:13.888 23:34:02 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:13.888 23:34:02 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:13.888 23:34:02 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:13.888 23:34:02 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:13.888 23:34:02 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:13.888 23:34:02 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:13.888 23:34:02 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:13.888 23:34:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:13.888 23:34:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.888 23:34:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:13.888 23:34:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:13.888 23:34:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:13.888 23:34:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.888 23:34:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:13.888 23:34:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.889 23:34:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:13.889 23:34:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:13.889 23:34:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:13.889 23:34:02 -- common/autotest_common.sh@10 -- # set +x 00:31:20.568 23:34:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:20.568 23:34:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:20.568 23:34:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:20.568 23:34:09 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:20.568 23:34:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:20.568 23:34:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:20.568 23:34:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:20.568 23:34:09 -- nvmf/common.sh@295 -- # net_devs=() 00:31:20.568 23:34:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:20.568 23:34:09 -- nvmf/common.sh@296 -- # e810=() 00:31:20.568 23:34:09 -- nvmf/common.sh@296 -- # local -ga e810 00:31:20.568 23:34:09 -- nvmf/common.sh@297 -- # x722=() 00:31:20.568 23:34:09 -- nvmf/common.sh@297 -- # local -ga x722 00:31:20.568 23:34:09 -- nvmf/common.sh@298 -- # mlx=() 00:31:20.568 23:34:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:20.568 23:34:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.568 23:34:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:20.569 23:34:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:20.569 23:34:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:20.569 23:34:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.569 23:34:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:20.569 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:20.569 23:34:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.569 23:34:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:20.569 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:20.569 23:34:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:20.569 23:34:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:20.569 23:34:09 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.569 23:34:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.569 23:34:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:20.569 23:34:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.569 23:34:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:20.569 Found net devices under 0000:31:00.0: cvl_0_0 00:31:20.569 23:34:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.569 23:34:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.569 23:34:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.569 23:34:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:20.569 23:34:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.569 23:34:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:20.569 Found net devices under 0000:31:00.1: cvl_0_1 00:31:20.569 23:34:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.569 23:34:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:20.569 23:34:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:20.569 23:34:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:20.569 23:34:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:20.569 23:34:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.569 23:34:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.569 23:34:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.569 23:34:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:20.569 23:34:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.569 23:34:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.569 23:34:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:20.569 23:34:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.569 23:34:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.569 23:34:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:20.924 23:34:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:20.924 23:34:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.924 23:34:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.924 23:34:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.924 23:34:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.924 23:34:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:20.924 23:34:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.924 23:34:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.924 23:34:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.924 23:34:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:20.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:20.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:31:20.924 00:31:20.924 --- 10.0.0.2 ping statistics --- 00:31:20.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.924 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:31:20.924 23:34:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:20.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:31:20.924 00:31:20.924 --- 10.0.0.1 ping statistics --- 00:31:20.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.924 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:31:20.924 23:34:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.924 23:34:10 -- nvmf/common.sh@411 -- # return 0 00:31:20.924 23:34:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:20.924 23:34:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.924 23:34:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:20.924 23:34:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:20.924 23:34:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.924 23:34:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:20.924 23:34:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:21.218 23:34:10 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:21.218 23:34:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:21.218 23:34:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:21.218 23:34:10 -- common/autotest_common.sh@10 -- # set +x 00:31:21.218 23:34:10 -- nvmf/common.sh@470 -- # nvmfpid=4139654 00:31:21.218 23:34:10 -- nvmf/common.sh@471 -- # waitforlisten 4139654 00:31:21.218 23:34:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:21.218 23:34:10 -- common/autotest_common.sh@817 -- # '[' -z 4139654 ']' 00:31:21.218 23:34:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.218 23:34:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:21.218 23:34:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.218 23:34:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:21.218 23:34:10 -- common/autotest_common.sh@10 -- # set +x 00:31:21.218 [2024-04-26 23:34:10.244732] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:31:21.218 [2024-04-26 23:34:10.244799] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.218 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.218 [2024-04-26 23:34:10.315867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.218 [2024-04-26 23:34:10.352861] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.218 [2024-04-26 23:34:10.352904] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:21.218 [2024-04-26 23:34:10.352912] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.218 [2024-04-26 23:34:10.352925] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.218 [2024-04-26 23:34:10.352931] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.218 [2024-04-26 23:34:10.352952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.787 23:34:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:21.787 23:34:11 -- common/autotest_common.sh@850 -- # return 0 00:31:21.787 23:34:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:21.787 23:34:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:21.787 23:34:11 -- common/autotest_common.sh@10 -- # set +x 00:31:22.047 23:34:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.047 23:34:11 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:22.047 23:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:22.047 23:34:11 -- common/autotest_common.sh@10 -- # set +x 00:31:22.047 [2024-04-26 23:34:11.059486] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.047 [2024-04-26 23:34:11.067614] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:22.047 null0 00:31:22.047 [2024-04-26 23:34:11.099624] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.047 23:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:22.047 23:34:11 -- host/discovery_remove_ifc.sh@59 -- # hostpid=4140000 00:31:22.047 23:34:11 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4140000 /tmp/host.sock 00:31:22.047 23:34:11 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:22.047 23:34:11 -- common/autotest_common.sh@817 -- # '[' -z 4140000 ']' 00:31:22.047 23:34:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:31:22.047 23:34:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:22.047 23:34:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:22.047 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:22.047 23:34:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:22.047 23:34:11 -- common/autotest_common.sh@10 -- # set +x 00:31:22.047 [2024-04-26 23:34:11.170731] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:31:22.047 [2024-04-26 23:34:11.170776] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4140000 ] 00:31:22.047 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.047 [2024-04-26 23:34:11.229868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.047 [2024-04-26 23:34:11.258954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.047 23:34:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:22.047 23:34:11 -- common/autotest_common.sh@850 -- # return 0 00:31:22.047 23:34:11 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:22.047 23:34:11 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:22.047 23:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:22.047 23:34:11 -- common/autotest_common.sh@10 -- # set +x 00:31:22.307 23:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:22.307 23:34:11 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:22.307 23:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:22.307 23:34:11 -- common/autotest_common.sh@10 -- # set +x 00:31:22.307 23:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:22.307 23:34:11 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:22.307 23:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:22.307 23:34:11 -- common/autotest_common.sh@10 -- # set +x 00:31:23.248 [2024-04-26 23:34:12.376703] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:23.248 [2024-04-26 23:34:12.376722] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:23.248 [2024-04-26 23:34:12.376735] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:23.508 [2024-04-26 23:34:12.506170] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:23.508 [2024-04-26 23:34:12.691929] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:23.508 [2024-04-26 23:34:12.691977] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:23.508 [2024-04-26 23:34:12.691997] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:23.508 [2024-04-26 23:34:12.692012] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:23.508 [2024-04-26 23:34:12.692031] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:23.508 23:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:23.508 23:34:12 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:23.508 [2024-04-26 23:34:12.695755] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfd9230 was disconnected and freed. delete nvme_qpair. 
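The wait_for_bdev checks that follow are built on the get_bdev_list helper whose pipeline is visible at host/discovery_remove_ifc.sh@29-@34: list bdevs over the host socket, extract the names, sort, and flatten. A reconstruction under those assumptions (the real helper presumably bounds the wait; this sketch polls indefinitely):

    # get_bdev_list, pipeline taken verbatim from the xtrace above.
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list matches the expectation,
    # mirroring the compare-then-"sleep 1" cycle at @33/@34.
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

    # wait_for_bdev nvme0n1   # bdev appears after the discovery attach
    # wait_for_bdev ''        # bdev gone once the interface is removed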
00:31:23.508 23:34:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:23.508 23:34:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.508 23:34:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:23.508 23:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:23.508 23:34:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:23.508 23:34:12 -- common/autotest_common.sh@10 -- # set +x 00:31:23.508 23:34:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:23.508 23:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:23.508 23:34:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:23.508 23:34:12 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:23.508 23:34:12 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:23.783 23:34:12 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:23.783 23:34:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:23.783 23:34:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.783 23:34:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:23.783 23:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:23.783 23:34:12 -- common/autotest_common.sh@10 -- # set +x 00:31:23.783 23:34:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:23.783 23:34:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:23.783 23:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:23.783 23:34:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:23.783 23:34:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:24.731 23:34:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:24.731 23:34:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.731 23:34:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:24.731 23:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.731 23:34:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:24.731 23:34:13 -- common/autotest_common.sh@10 -- # set +x 00:31:24.731 23:34:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:24.731 23:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.731 23:34:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:24.731 23:34:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:26.115 23:34:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:26.115 23:34:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.115 23:34:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:26.115 23:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.115 23:34:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:26.115 23:34:14 -- common/autotest_common.sh@10 -- # set +x 00:31:26.115 23:34:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:26.115 23:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.116 23:34:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:26.116 23:34:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:27.118 23:34:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:27.118 23:34:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.118 23:34:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:31:27.118 23:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:27.118 23:34:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:27.118 23:34:16 -- common/autotest_common.sh@10 -- # set +x 00:31:27.118 23:34:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:27.118 23:34:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:27.118 23:34:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:27.118 23:34:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:28.059 23:34:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:28.059 23:34:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.059 23:34:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:28.059 23:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:28.059 23:34:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:28.059 23:34:17 -- common/autotest_common.sh@10 -- # set +x 00:31:28.059 23:34:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:28.059 23:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:28.059 23:34:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:28.059 23:34:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:29.001 [2024-04-26 23:34:18.132458] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:29.001 [2024-04-26 23:34:18.132500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.001 [2024-04-26 23:34:18.132510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.001 [2024-04-26 23:34:18.132520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.001 [2024-04-26 23:34:18.132527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.001 [2024-04-26 23:34:18.132535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.001 [2024-04-26 23:34:18.132542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.001 [2024-04-26 23:34:18.132550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.001 [2024-04-26 23:34:18.132557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.001 [2024-04-26 23:34:18.132565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:29.001 [2024-04-26 23:34:18.132572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.001 [2024-04-26 23:34:18.132579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9f540 is same with the state(5) to be set 00:31:29.001 [2024-04-26 23:34:18.142480] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9f540 (9): Bad file descriptor 00:31:29.001 
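The failover behavior unfolding above (the TCP qpair torn down once the interface was removed, reconnects retried every second, the bdev kept until the controller-loss window expires) is governed by the flags passed when discovery was started at @69. As a standalone sketch, the same call made directly with SPDK's rpc.py would look like this, with every flag copied from the trace:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach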
[2024-04-26 23:34:18.152518] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:29.001 23:34:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:29.001 23:34:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.001 23:34:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:29.001 23:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:29.001 23:34:18 -- common/autotest_common.sh@10 -- # set +x 00:31:29.001 23:34:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:29.001 23:34:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.385 [2024-04-26 23:34:19.208887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:31.326 [2024-04-26 23:34:20.232912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:31.326 [2024-04-26 23:34:20.232964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9f540 with addr=10.0.0.2, port=4420 00:31:31.326 [2024-04-26 23:34:20.232977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9f540 is same with the state(5) to be set 00:31:31.326 [2024-04-26 23:34:20.233360] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9f540 (9): Bad file descriptor 00:31:31.326 [2024-04-26 23:34:20.233382] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:31.326 [2024-04-26 23:34:20.233401] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:31.326 [2024-04-26 23:34:20.233424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.326 [2024-04-26 23:34:20.233434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.326 [2024-04-26 23:34:20.233444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.326 [2024-04-26 23:34:20.233451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.326 [2024-04-26 23:34:20.233459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.326 [2024-04-26 23:34:20.233466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.326 [2024-04-26 23:34:20.233474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.326 [2024-04-26 23:34:20.233481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.326 [2024-04-26 23:34:20.233489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.326 [2024-04-26 23:34:20.233496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.326 [2024-04-26 23:34:20.233503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in 
failed state. 00:31:31.326 [2024-04-26 23:34:20.234001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9f950 (9): Bad file descriptor 00:31:31.326 [2024-04-26 23:34:20.235013] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:31.326 [2024-04-26 23:34:20.235024] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:31.326 23:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:31.326 23:34:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:31.326 23:34:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:32.267 23:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.267 23:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.267 23:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:32.267 23:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:32.267 23:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.267 23:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:32.267 23:34:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:33.209 [2024-04-26 23:34:22.251933] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:33.209 [2024-04-26 23:34:22.251954] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:33.209 [2024-04-26 23:34:22.251968] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:33.209 [2024-04-26 23:34:22.379353] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:33.209 [2024-04-26 23:34:22.441044] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:33.209 [2024-04-26 23:34:22.441082] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:33.209 [2024-04-26 23:34:22.441102] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:33.209 [2024-04-26 23:34:22.441115] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme1 done 00:31:33.209 [2024-04-26 23:34:22.441122] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:33.209 [2024-04-26 23:34:22.450339] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfe3a90 was disconnected and freed. delete nvme_qpair. 00:31:33.470 23:34:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:33.470 23:34:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.470 23:34:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:33.470 23:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.470 23:34:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:33.470 23:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.470 23:34:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.470 23:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.470 23:34:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:33.470 23:34:22 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:33.470 23:34:22 -- host/discovery_remove_ifc.sh@90 -- # killprocess 4140000 00:31:33.470 23:34:22 -- common/autotest_common.sh@936 -- # '[' -z 4140000 ']' 00:31:33.470 23:34:22 -- common/autotest_common.sh@940 -- # kill -0 4140000 00:31:33.470 23:34:22 -- common/autotest_common.sh@941 -- # uname 00:31:33.470 23:34:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:33.470 23:34:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4140000 00:31:33.470 23:34:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:33.470 23:34:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:33.470 23:34:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4140000' 00:31:33.470 killing process with pid 4140000 00:31:33.470 23:34:22 -- common/autotest_common.sh@955 -- # kill 4140000 00:31:33.470 23:34:22 -- common/autotest_common.sh@960 -- # wait 4140000 00:31:33.470 23:34:22 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:33.470 23:34:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:33.470 23:34:22 -- nvmf/common.sh@117 -- # sync 00:31:33.470 23:34:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:33.470 23:34:22 -- nvmf/common.sh@120 -- # set +e 00:31:33.470 23:34:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:33.470 23:34:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:33.470 rmmod nvme_tcp 00:31:33.732 rmmod nvme_fabrics 00:31:33.732 rmmod nvme_keyring 00:31:33.732 23:34:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:33.732 23:34:22 -- nvmf/common.sh@124 -- # set -e 00:31:33.732 23:34:22 -- nvmf/common.sh@125 -- # return 0 00:31:33.732 23:34:22 -- nvmf/common.sh@478 -- # '[' -n 4139654 ']' 00:31:33.732 23:34:22 -- nvmf/common.sh@479 -- # killprocess 4139654 00:31:33.732 23:34:22 -- common/autotest_common.sh@936 -- # '[' -z 4139654 ']' 00:31:33.732 23:34:22 -- common/autotest_common.sh@940 -- # kill -0 4139654 00:31:33.732 23:34:22 -- common/autotest_common.sh@941 -- # uname 00:31:33.732 23:34:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:33.732 23:34:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4139654 00:31:33.732 23:34:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:33.732 23:34:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:31:33.732 23:34:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4139654' 00:31:33.732 killing process with pid 4139654 00:31:33.732 23:34:22 -- common/autotest_common.sh@955 -- # kill 4139654 00:31:33.732 23:34:22 -- common/autotest_common.sh@960 -- # wait 4139654 00:31:33.732 23:34:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:33.732 23:34:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:33.732 23:34:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:33.732 23:34:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:33.732 23:34:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:33.732 23:34:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.732 23:34:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:33.732 23:34:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.281 23:34:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:36.281 00:31:36.281 real 0m22.223s 00:31:36.281 user 0m24.604s 00:31:36.281 sys 0m6.541s 00:31:36.281 23:34:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:36.281 23:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.281 ************************************ 00:31:36.281 END TEST nvmf_discovery_remove_ifc 00:31:36.281 ************************************ 00:31:36.281 23:34:25 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:36.281 23:34:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:36.281 23:34:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:36.281 23:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.281 ************************************ 00:31:36.281 START TEST nvmf_identify_kernel_target 00:31:36.281 ************************************ 00:31:36.281 23:34:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:36.281 * Looking for test storage... 
00:31:36.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:36.281 23:34:25 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.281 23:34:25 -- nvmf/common.sh@7 -- # uname -s 00:31:36.281 23:34:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.281 23:34:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.281 23:34:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.281 23:34:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.281 23:34:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.281 23:34:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.281 23:34:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.281 23:34:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.281 23:34:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.281 23:34:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.281 23:34:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:36.281 23:34:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:36.281 23:34:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.281 23:34:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.281 23:34:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.281 23:34:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.281 23:34:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.281 23:34:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.281 23:34:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.281 23:34:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.281 23:34:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.281 23:34:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.281 23:34:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.281 23:34:25 -- paths/export.sh@5 -- # export PATH 00:31:36.281 23:34:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.281 23:34:25 -- nvmf/common.sh@47 -- # : 0 00:31:36.281 23:34:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:36.281 23:34:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:36.281 23:34:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.281 23:34:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.281 23:34:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.281 23:34:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:36.281 23:34:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:36.281 23:34:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:36.281 23:34:25 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:36.281 23:34:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:36.281 23:34:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.281 23:34:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:36.281 23:34:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:36.281 23:34:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:36.281 23:34:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.281 23:34:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:36.281 23:34:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.281 23:34:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:36.281 23:34:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:36.281 23:34:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:36.281 23:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:44.425 23:34:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:44.425 23:34:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:44.425 23:34:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:44.425 23:34:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:44.425 23:34:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:44.425 23:34:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:44.425 23:34:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:44.425 23:34:32 -- nvmf/common.sh@295 -- # net_devs=() 00:31:44.425 23:34:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:44.425 23:34:32 -- nvmf/common.sh@296 -- # e810=() 00:31:44.425 23:34:32 -- nvmf/common.sh@296 -- # local -ga e810 00:31:44.425 23:34:32 -- nvmf/common.sh@297 -- # 
x722=() 00:31:44.425 23:34:32 -- nvmf/common.sh@297 -- # local -ga x722 00:31:44.425 23:34:32 -- nvmf/common.sh@298 -- # mlx=() 00:31:44.425 23:34:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:44.425 23:34:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.425 23:34:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:44.425 23:34:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:44.425 23:34:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:44.425 23:34:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:44.425 23:34:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:44.425 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:44.425 23:34:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:44.425 23:34:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:44.425 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:44.425 23:34:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:44.425 23:34:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:44.425 23:34:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.425 23:34:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:44.425 23:34:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.425 23:34:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:44.425 Found net devices under 0000:31:00.0: cvl_0_0 00:31:44.425 23:34:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:31:44.425 23:34:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:44.425 23:34:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.425 23:34:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:44.425 23:34:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.425 23:34:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:44.425 Found net devices under 0000:31:00.1: cvl_0_1 00:31:44.425 23:34:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.425 23:34:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:44.425 23:34:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:44.425 23:34:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:44.425 23:34:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.425 23:34:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.425 23:34:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.425 23:34:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:44.425 23:34:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.425 23:34:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.425 23:34:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:44.425 23:34:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.425 23:34:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.425 23:34:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:44.425 23:34:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:44.425 23:34:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.425 23:34:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.425 23:34:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.425 23:34:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.425 23:34:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:44.425 23:34:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.425 23:34:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.425 23:34:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.425 23:34:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:44.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:31:44.425 00:31:44.425 --- 10.0.0.2 ping statistics --- 00:31:44.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.425 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:31:44.425 23:34:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:44.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:31:44.425 00:31:44.425 --- 10.0.0.1 ping statistics --- 00:31:44.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.425 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:31:44.425 23:34:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.425 23:34:32 -- nvmf/common.sh@411 -- # return 0 00:31:44.425 23:34:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:44.425 23:34:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.425 23:34:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.425 23:34:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:44.425 23:34:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:44.425 23:34:32 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:44.425 23:34:32 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:44.425 23:34:32 -- nvmf/common.sh@717 -- # local ip 00:31:44.425 23:34:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:44.425 23:34:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:44.425 23:34:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.425 23:34:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.425 23:34:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:44.425 23:34:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:44.425 23:34:32 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:44.425 23:34:32 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:44.425 23:34:32 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:44.425 23:34:32 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:31:44.425 23:34:32 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:44.425 23:34:32 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:44.425 23:34:32 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:44.425 23:34:32 -- nvmf/common.sh@628 -- # local block nvme 00:31:44.425 23:34:32 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@631 -- # modprobe nvmet 00:31:44.425 23:34:32 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:44.425 23:34:32 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:46.972 Waiting for block devices as requested 00:31:46.972 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:46.973 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:46.973 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:47.233 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:47.233 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:47.233 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:47.494 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:47.494 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:47.494 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:47.754 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:47.754 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:47.754 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:48.015 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:48.015 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:48.015 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:48.015 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:48.275 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:48.537 23:34:37 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:48.537 23:34:37 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:48.537 23:34:37 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:31:48.537 23:34:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:48.537 23:34:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:48.537 23:34:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:48.537 23:34:37 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:31:48.537 23:34:37 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:48.537 23:34:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:48.537 No valid GPT data, bailing 00:31:48.537 23:34:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:48.537 23:34:37 -- scripts/common.sh@391 -- # pt= 00:31:48.537 23:34:37 -- scripts/common.sh@392 -- # return 1 00:31:48.537 23:34:37 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:31:48.537 23:34:37 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:31:48.537 23:34:37 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:48.537 23:34:37 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:48.537 23:34:37 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:48.537 23:34:37 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:48.537 23:34:37 -- nvmf/common.sh@656 -- # echo 1 00:31:48.537 23:34:37 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:31:48.537 23:34:37 -- nvmf/common.sh@658 -- # echo 1 00:31:48.537 23:34:37 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:31:48.537 23:34:37 -- nvmf/common.sh@661 -- # echo tcp 00:31:48.537 23:34:37 -- nvmf/common.sh@662 -- # echo 4420 00:31:48.537 23:34:37 -- nvmf/common.sh@663 -- # echo ipv4 00:31:48.537 23:34:37 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:48.537 23:34:37 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:31:48.537 00:31:48.537 Discovery Log Number of Records 2, Generation counter 2 00:31:48.537 =====Discovery Log Entry 0====== 00:31:48.537 trtype: tcp 00:31:48.537 adrfam: ipv4 00:31:48.537 subtype: current discovery subsystem 00:31:48.537 treq: not specified, sq flow control disable supported 00:31:48.537 portid: 1 00:31:48.537 trsvcid: 4420 00:31:48.537 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:48.537 traddr: 10.0.0.1 00:31:48.537 eflags: none 00:31:48.537 sectype: none 00:31:48.537 =====Discovery Log Entry 1====== 00:31:48.537 trtype: tcp 00:31:48.537 adrfam: ipv4 00:31:48.537 subtype: nvme subsystem 00:31:48.537 treq: not specified, sq flow control disable supported 00:31:48.537 portid: 1 00:31:48.537 trsvcid: 4420 00:31:48.537 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:48.537 traddr: 10.0.0.1 00:31:48.537 eflags: none 00:31:48.537 sectype: none 00:31:48.537 23:34:37 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:48.537 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:48.800 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.800 ===================================================== 00:31:48.800 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:48.800 ===================================================== 00:31:48.800 Controller Capabilities/Features 00:31:48.800 ================================ 00:31:48.800 Vendor ID: 0000 00:31:48.800 Subsystem Vendor ID: 0000 00:31:48.800 Serial Number: d149dc064a2c28f401ad 00:31:48.800 Model Number: Linux 00:31:48.800 Firmware Version: 6.7.0-68 00:31:48.800 Recommended Arb Burst: 0 00:31:48.800 IEEE OUI Identifier: 00 00 00 00:31:48.800 Multi-path I/O 00:31:48.800 May have multiple subsystem ports: No 00:31:48.800 May have multiple controllers: No 00:31:48.800 Associated with SR-IOV VF: No 00:31:48.800 Max Data Transfer Size: Unlimited 00:31:48.800 Max Number of Namespaces: 0 00:31:48.800 Max Number of I/O Queues: 1024 00:31:48.800 NVMe Specification Version (VS): 1.3 00:31:48.800 NVMe Specification Version (Identify): 1.3 00:31:48.800 Maximum Queue Entries: 1024 00:31:48.800 Contiguous Queues Required: No 00:31:48.800 Arbitration Mechanisms Supported 00:31:48.800 Weighted Round Robin: Not Supported 00:31:48.800 Vendor Specific: Not Supported 00:31:48.800 Reset Timeout: 7500 ms 00:31:48.800 Doorbell Stride: 4 bytes 00:31:48.800 NVM Subsystem Reset: Not Supported 00:31:48.800 Command Sets Supported 00:31:48.800 NVM Command Set: Supported 00:31:48.800 Boot Partition: Not Supported 00:31:48.800 Memory Page Size Minimum: 4096 bytes 00:31:48.800 Memory Page Size Maximum: 4096 bytes 00:31:48.800 Persistent Memory Region: Not Supported 00:31:48.800 Optional Asynchronous Events Supported 00:31:48.800 Namespace Attribute Notices: Not Supported 00:31:48.800 Firmware Activation Notices: Not Supported 00:31:48.800 ANA Change Notices: Not Supported 00:31:48.800 PLE Aggregate Log Change Notices: Not Supported 00:31:48.800 LBA Status Info Alert Notices: Not Supported 00:31:48.800 EGE Aggregate Log Change Notices: Not Supported 00:31:48.800 Normal NVM Subsystem Shutdown event: Not Supported 00:31:48.800 Zone Descriptor Change Notices: Not Supported 00:31:48.800 Discovery Log Change Notices: Supported 
00:31:48.800 Controller Attributes 00:31:48.800 128-bit Host Identifier: Not Supported 00:31:48.800 Non-Operational Permissive Mode: Not Supported 00:31:48.800 NVM Sets: Not Supported 00:31:48.800 Read Recovery Levels: Not Supported 00:31:48.800 Endurance Groups: Not Supported 00:31:48.800 Predictable Latency Mode: Not Supported 00:31:48.800 Traffic Based Keep ALive: Not Supported 00:31:48.800 Namespace Granularity: Not Supported 00:31:48.800 SQ Associations: Not Supported 00:31:48.800 UUID List: Not Supported 00:31:48.800 Multi-Domain Subsystem: Not Supported 00:31:48.800 Fixed Capacity Management: Not Supported 00:31:48.800 Variable Capacity Management: Not Supported 00:31:48.800 Delete Endurance Group: Not Supported 00:31:48.800 Delete NVM Set: Not Supported 00:31:48.800 Extended LBA Formats Supported: Not Supported 00:31:48.800 Flexible Data Placement Supported: Not Supported 00:31:48.800 00:31:48.800 Controller Memory Buffer Support 00:31:48.800 ================================ 00:31:48.800 Supported: No 00:31:48.800 00:31:48.800 Persistent Memory Region Support 00:31:48.800 ================================ 00:31:48.800 Supported: No 00:31:48.800 00:31:48.800 Admin Command Set Attributes 00:31:48.800 ============================ 00:31:48.800 Security Send/Receive: Not Supported 00:31:48.800 Format NVM: Not Supported 00:31:48.800 Firmware Activate/Download: Not Supported 00:31:48.800 Namespace Management: Not Supported 00:31:48.800 Device Self-Test: Not Supported 00:31:48.800 Directives: Not Supported 00:31:48.800 NVMe-MI: Not Supported 00:31:48.800 Virtualization Management: Not Supported 00:31:48.800 Doorbell Buffer Config: Not Supported 00:31:48.800 Get LBA Status Capability: Not Supported 00:31:48.800 Command & Feature Lockdown Capability: Not Supported 00:31:48.800 Abort Command Limit: 1 00:31:48.800 Async Event Request Limit: 1 00:31:48.800 Number of Firmware Slots: N/A 00:31:48.800 Firmware Slot 1 Read-Only: N/A 00:31:48.800 Firmware Activation Without Reset: N/A 00:31:48.800 Multiple Update Detection Support: N/A 00:31:48.800 Firmware Update Granularity: No Information Provided 00:31:48.800 Per-Namespace SMART Log: No 00:31:48.800 Asymmetric Namespace Access Log Page: Not Supported 00:31:48.800 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:48.800 Command Effects Log Page: Not Supported 00:31:48.800 Get Log Page Extended Data: Supported 00:31:48.800 Telemetry Log Pages: Not Supported 00:31:48.800 Persistent Event Log Pages: Not Supported 00:31:48.800 Supported Log Pages Log Page: May Support 00:31:48.800 Commands Supported & Effects Log Page: Not Supported 00:31:48.800 Feature Identifiers & Effects Log Page:May Support 00:31:48.800 NVMe-MI Commands & Effects Log Page: May Support 00:31:48.800 Data Area 4 for Telemetry Log: Not Supported 00:31:48.800 Error Log Page Entries Supported: 1 00:31:48.800 Keep Alive: Not Supported 00:31:48.800 00:31:48.800 NVM Command Set Attributes 00:31:48.800 ========================== 00:31:48.800 Submission Queue Entry Size 00:31:48.800 Max: 1 00:31:48.800 Min: 1 00:31:48.800 Completion Queue Entry Size 00:31:48.800 Max: 1 00:31:48.800 Min: 1 00:31:48.800 Number of Namespaces: 0 00:31:48.800 Compare Command: Not Supported 00:31:48.800 Write Uncorrectable Command: Not Supported 00:31:48.800 Dataset Management Command: Not Supported 00:31:48.800 Write Zeroes Command: Not Supported 00:31:48.800 Set Features Save Field: Not Supported 00:31:48.800 Reservations: Not Supported 00:31:48.800 Timestamp: Not Supported 00:31:48.800 Copy: Not 
Supported 00:31:48.800 Volatile Write Cache: Not Present 00:31:48.800 Atomic Write Unit (Normal): 1 00:31:48.800 Atomic Write Unit (PFail): 1 00:31:48.800 Atomic Compare & Write Unit: 1 00:31:48.800 Fused Compare & Write: Not Supported 00:31:48.800 Scatter-Gather List 00:31:48.800 SGL Command Set: Supported 00:31:48.800 SGL Keyed: Not Supported 00:31:48.800 SGL Bit Bucket Descriptor: Not Supported 00:31:48.800 SGL Metadata Pointer: Not Supported 00:31:48.800 Oversized SGL: Not Supported 00:31:48.800 SGL Metadata Address: Not Supported 00:31:48.800 SGL Offset: Supported 00:31:48.800 Transport SGL Data Block: Not Supported 00:31:48.800 Replay Protected Memory Block: Not Supported 00:31:48.800 00:31:48.800 Firmware Slot Information 00:31:48.800 ========================= 00:31:48.800 Active slot: 0 00:31:48.800 00:31:48.800 00:31:48.800 Error Log 00:31:48.800 ========= 00:31:48.800 00:31:48.800 Active Namespaces 00:31:48.800 ================= 00:31:48.800 Discovery Log Page 00:31:48.800 ================== 00:31:48.800 Generation Counter: 2 00:31:48.800 Number of Records: 2 00:31:48.800 Record Format: 0 00:31:48.800 00:31:48.800 Discovery Log Entry 0 00:31:48.800 ---------------------- 00:31:48.800 Transport Type: 3 (TCP) 00:31:48.800 Address Family: 1 (IPv4) 00:31:48.800 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:48.800 Entry Flags: 00:31:48.800 Duplicate Returned Information: 0 00:31:48.801 Explicit Persistent Connection Support for Discovery: 0 00:31:48.801 Transport Requirements: 00:31:48.801 Secure Channel: Not Specified 00:31:48.801 Port ID: 1 (0x0001) 00:31:48.801 Controller ID: 65535 (0xffff) 00:31:48.801 Admin Max SQ Size: 32 00:31:48.801 Transport Service Identifier: 4420 00:31:48.801 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:48.801 Transport Address: 10.0.0.1 00:31:48.801 Discovery Log Entry 1 00:31:48.801 ---------------------- 00:31:48.801 Transport Type: 3 (TCP) 00:31:48.801 Address Family: 1 (IPv4) 00:31:48.801 Subsystem Type: 2 (NVM Subsystem) 00:31:48.801 Entry Flags: 00:31:48.801 Duplicate Returned Information: 0 00:31:48.801 Explicit Persistent Connection Support for Discovery: 0 00:31:48.801 Transport Requirements: 00:31:48.801 Secure Channel: Not Specified 00:31:48.801 Port ID: 1 (0x0001) 00:31:48.801 Controller ID: 65535 (0xffff) 00:31:48.801 Admin Max SQ Size: 32 00:31:48.801 Transport Service Identifier: 4420 00:31:48.801 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:48.801 Transport Address: 10.0.0.1 00:31:48.801 23:34:37 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:48.801 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.801 get_feature(0x01) failed 00:31:48.801 get_feature(0x02) failed 00:31:48.801 get_feature(0x04) failed 00:31:48.801 ===================================================== 00:31:48.801 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:48.801 ===================================================== 00:31:48.801 Controller Capabilities/Features 00:31:48.801 ================================ 00:31:48.801 Vendor ID: 0000 00:31:48.801 Subsystem Vendor ID: 0000 00:31:48.801 Serial Number: c60ec04745ac55146095 00:31:48.801 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:48.801 Firmware Version: 6.7.0-68 00:31:48.801 Recommended Arb Burst: 6 00:31:48.801 IEEE OUI Identifier: 00 00 00 
00:31:48.801 Multi-path I/O 00:31:48.801 May have multiple subsystem ports: Yes 00:31:48.801 May have multiple controllers: Yes 00:31:48.801 Associated with SR-IOV VF: No 00:31:48.801 Max Data Transfer Size: Unlimited 00:31:48.801 Max Number of Namespaces: 1024 00:31:48.801 Max Number of I/O Queues: 128 00:31:48.801 NVMe Specification Version (VS): 1.3 00:31:48.801 NVMe Specification Version (Identify): 1.3 00:31:48.801 Maximum Queue Entries: 1024 00:31:48.801 Contiguous Queues Required: No 00:31:48.801 Arbitration Mechanisms Supported 00:31:48.801 Weighted Round Robin: Not Supported 00:31:48.801 Vendor Specific: Not Supported 00:31:48.801 Reset Timeout: 7500 ms 00:31:48.801 Doorbell Stride: 4 bytes 00:31:48.801 NVM Subsystem Reset: Not Supported 00:31:48.801 Command Sets Supported 00:31:48.801 NVM Command Set: Supported 00:31:48.801 Boot Partition: Not Supported 00:31:48.801 Memory Page Size Minimum: 4096 bytes 00:31:48.801 Memory Page Size Maximum: 4096 bytes 00:31:48.801 Persistent Memory Region: Not Supported 00:31:48.801 Optional Asynchronous Events Supported 00:31:48.801 Namespace Attribute Notices: Supported 00:31:48.801 Firmware Activation Notices: Not Supported 00:31:48.801 ANA Change Notices: Supported 00:31:48.801 PLE Aggregate Log Change Notices: Not Supported 00:31:48.801 LBA Status Info Alert Notices: Not Supported 00:31:48.801 EGE Aggregate Log Change Notices: Not Supported 00:31:48.801 Normal NVM Subsystem Shutdown event: Not Supported 00:31:48.801 Zone Descriptor Change Notices: Not Supported 00:31:48.801 Discovery Log Change Notices: Not Supported 00:31:48.801 Controller Attributes 00:31:48.801 128-bit Host Identifier: Supported 00:31:48.801 Non-Operational Permissive Mode: Not Supported 00:31:48.801 NVM Sets: Not Supported 00:31:48.801 Read Recovery Levels: Not Supported 00:31:48.801 Endurance Groups: Not Supported 00:31:48.801 Predictable Latency Mode: Not Supported 00:31:48.801 Traffic Based Keep ALive: Supported 00:31:48.801 Namespace Granularity: Not Supported 00:31:48.801 SQ Associations: Not Supported 00:31:48.801 UUID List: Not Supported 00:31:48.801 Multi-Domain Subsystem: Not Supported 00:31:48.801 Fixed Capacity Management: Not Supported 00:31:48.801 Variable Capacity Management: Not Supported 00:31:48.801 Delete Endurance Group: Not Supported 00:31:48.801 Delete NVM Set: Not Supported 00:31:48.801 Extended LBA Formats Supported: Not Supported 00:31:48.801 Flexible Data Placement Supported: Not Supported 00:31:48.801 00:31:48.801 Controller Memory Buffer Support 00:31:48.801 ================================ 00:31:48.801 Supported: No 00:31:48.801 00:31:48.801 Persistent Memory Region Support 00:31:48.801 ================================ 00:31:48.801 Supported: No 00:31:48.801 00:31:48.801 Admin Command Set Attributes 00:31:48.801 ============================ 00:31:48.801 Security Send/Receive: Not Supported 00:31:48.801 Format NVM: Not Supported 00:31:48.801 Firmware Activate/Download: Not Supported 00:31:48.801 Namespace Management: Not Supported 00:31:48.801 Device Self-Test: Not Supported 00:31:48.801 Directives: Not Supported 00:31:48.801 NVMe-MI: Not Supported 00:31:48.801 Virtualization Management: Not Supported 00:31:48.801 Doorbell Buffer Config: Not Supported 00:31:48.801 Get LBA Status Capability: Not Supported 00:31:48.801 Command & Feature Lockdown Capability: Not Supported 00:31:48.801 Abort Command Limit: 4 00:31:48.801 Async Event Request Limit: 4 00:31:48.801 Number of Firmware Slots: N/A 00:31:48.801 Firmware Slot 1 Read-Only: N/A 00:31:48.801 
Firmware Activation Without Reset: N/A 00:31:48.801 Multiple Update Detection Support: N/A 00:31:48.801 Firmware Update Granularity: No Information Provided 00:31:48.801 Per-Namespace SMART Log: Yes 00:31:48.801 Asymmetric Namespace Access Log Page: Supported 00:31:48.801 ANA Transition Time : 10 sec 00:31:48.801 00:31:48.801 Asymmetric Namespace Access Capabilities 00:31:48.801 ANA Optimized State : Supported 00:31:48.801 ANA Non-Optimized State : Supported 00:31:48.801 ANA Inaccessible State : Supported 00:31:48.801 ANA Persistent Loss State : Supported 00:31:48.801 ANA Change State : Supported 00:31:48.801 ANAGRPID is not changed : No 00:31:48.801 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:48.801 00:31:48.801 ANA Group Identifier Maximum : 128 00:31:48.801 Number of ANA Group Identifiers : 128 00:31:48.801 Max Number of Allowed Namespaces : 1024 00:31:48.801 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:48.801 Command Effects Log Page: Supported 00:31:48.801 Get Log Page Extended Data: Supported 00:31:48.801 Telemetry Log Pages: Not Supported 00:31:48.801 Persistent Event Log Pages: Not Supported 00:31:48.801 Supported Log Pages Log Page: May Support 00:31:48.801 Commands Supported & Effects Log Page: Not Supported 00:31:48.801 Feature Identifiers & Effects Log Page:May Support 00:31:48.801 NVMe-MI Commands & Effects Log Page: May Support 00:31:48.801 Data Area 4 for Telemetry Log: Not Supported 00:31:48.801 Error Log Page Entries Supported: 128 00:31:48.801 Keep Alive: Supported 00:31:48.801 Keep Alive Granularity: 1000 ms 00:31:48.801 00:31:48.801 NVM Command Set Attributes 00:31:48.801 ========================== 00:31:48.801 Submission Queue Entry Size 00:31:48.801 Max: 64 00:31:48.801 Min: 64 00:31:48.801 Completion Queue Entry Size 00:31:48.801 Max: 16 00:31:48.801 Min: 16 00:31:48.801 Number of Namespaces: 1024 00:31:48.801 Compare Command: Not Supported 00:31:48.801 Write Uncorrectable Command: Not Supported 00:31:48.801 Dataset Management Command: Supported 00:31:48.801 Write Zeroes Command: Supported 00:31:48.801 Set Features Save Field: Not Supported 00:31:48.801 Reservations: Not Supported 00:31:48.801 Timestamp: Not Supported 00:31:48.801 Copy: Not Supported 00:31:48.801 Volatile Write Cache: Present 00:31:48.801 Atomic Write Unit (Normal): 1 00:31:48.801 Atomic Write Unit (PFail): 1 00:31:48.801 Atomic Compare & Write Unit: 1 00:31:48.801 Fused Compare & Write: Not Supported 00:31:48.801 Scatter-Gather List 00:31:48.801 SGL Command Set: Supported 00:31:48.801 SGL Keyed: Not Supported 00:31:48.801 SGL Bit Bucket Descriptor: Not Supported 00:31:48.801 SGL Metadata Pointer: Not Supported 00:31:48.801 Oversized SGL: Not Supported 00:31:48.801 SGL Metadata Address: Not Supported 00:31:48.801 SGL Offset: Supported 00:31:48.801 Transport SGL Data Block: Not Supported 00:31:48.801 Replay Protected Memory Block: Not Supported 00:31:48.801 00:31:48.801 Firmware Slot Information 00:31:48.801 ========================= 00:31:48.801 Active slot: 0 00:31:48.801 00:31:48.801 Asymmetric Namespace Access 00:31:48.801 =========================== 00:31:48.801 Change Count : 0 00:31:48.801 Number of ANA Group Descriptors : 1 00:31:48.801 ANA Group Descriptor : 0 00:31:48.802 ANA Group ID : 1 00:31:48.802 Number of NSID Values : 1 00:31:48.802 Change Count : 0 00:31:48.802 ANA State : 1 00:31:48.802 Namespace Identifier : 1 00:31:48.802 00:31:48.802 Commands Supported and Effects 00:31:48.802 ============================== 00:31:48.802 Admin Commands 00:31:48.802 -------------- 
00:31:48.802 Get Log Page (02h): Supported 00:31:48.802 Identify (06h): Supported 00:31:48.802 Abort (08h): Supported 00:31:48.802 Set Features (09h): Supported 00:31:48.802 Get Features (0Ah): Supported 00:31:48.802 Asynchronous Event Request (0Ch): Supported 00:31:48.802 Keep Alive (18h): Supported 00:31:48.802 I/O Commands 00:31:48.802 ------------ 00:31:48.802 Flush (00h): Supported 00:31:48.802 Write (01h): Supported LBA-Change 00:31:48.802 Read (02h): Supported 00:31:48.802 Write Zeroes (08h): Supported LBA-Change 00:31:48.802 Dataset Management (09h): Supported 00:31:48.802 00:31:48.802 Error Log 00:31:48.802 ========= 00:31:48.802 Entry: 0 00:31:48.802 Error Count: 0x3 00:31:48.802 Submission Queue Id: 0x0 00:31:48.802 Command Id: 0x5 00:31:48.802 Phase Bit: 0 00:31:48.802 Status Code: 0x2 00:31:48.802 Status Code Type: 0x0 00:31:48.802 Do Not Retry: 1 00:31:48.802 Error Location: 0x28 00:31:48.802 LBA: 0x0 00:31:48.802 Namespace: 0x0 00:31:48.802 Vendor Log Page: 0x0 00:31:48.802 ----------- 00:31:48.802 Entry: 1 00:31:48.802 Error Count: 0x2 00:31:48.802 Submission Queue Id: 0x0 00:31:48.802 Command Id: 0x5 00:31:48.802 Phase Bit: 0 00:31:48.802 Status Code: 0x2 00:31:48.802 Status Code Type: 0x0 00:31:48.802 Do Not Retry: 1 00:31:48.802 Error Location: 0x28 00:31:48.802 LBA: 0x0 00:31:48.802 Namespace: 0x0 00:31:48.802 Vendor Log Page: 0x0 00:31:48.802 ----------- 00:31:48.802 Entry: 2 00:31:48.802 Error Count: 0x1 00:31:48.802 Submission Queue Id: 0x0 00:31:48.802 Command Id: 0x4 00:31:48.802 Phase Bit: 0 00:31:48.802 Status Code: 0x2 00:31:48.802 Status Code Type: 0x0 00:31:48.802 Do Not Retry: 1 00:31:48.802 Error Location: 0x28 00:31:48.802 LBA: 0x0 00:31:48.802 Namespace: 0x0 00:31:48.802 Vendor Log Page: 0x0 00:31:48.802 00:31:48.802 Number of Queues 00:31:48.802 ================ 00:31:48.802 Number of I/O Submission Queues: 128 00:31:48.802 Number of I/O Completion Queues: 128 00:31:48.802 00:31:48.802 ZNS Specific Controller Data 00:31:48.802 ============================ 00:31:48.802 Zone Append Size Limit: 0 00:31:48.802 00:31:48.802 00:31:48.802 Active Namespaces 00:31:48.802 ================= 00:31:48.802 get_feature(0x05) failed 00:31:48.802 Namespace ID:1 00:31:48.802 Command Set Identifier: NVM (00h) 00:31:48.802 Deallocate: Supported 00:31:48.802 Deallocated/Unwritten Error: Not Supported 00:31:48.802 Deallocated Read Value: Unknown 00:31:48.802 Deallocate in Write Zeroes: Not Supported 00:31:48.802 Deallocated Guard Field: 0xFFFF 00:31:48.802 Flush: Supported 00:31:48.802 Reservation: Not Supported 00:31:48.802 Namespace Sharing Capabilities: Multiple Controllers 00:31:48.802 Size (in LBAs): 3750748848 (1788GiB) 00:31:48.802 Capacity (in LBAs): 3750748848 (1788GiB) 00:31:48.802 Utilization (in LBAs): 3750748848 (1788GiB) 00:31:48.802 UUID: 64c091b6-de8c-43a8-9e04-2dcb68912d4c 00:31:48.802 Thin Provisioning: Not Supported 00:31:48.802 Per-NS Atomic Units: Yes 00:31:48.802 Atomic Write Unit (Normal): 8 00:31:48.802 Atomic Write Unit (PFail): 8 00:31:48.802 Preferred Write Granularity: 8 00:31:48.802 Atomic Compare & Write Unit: 8 00:31:48.802 Atomic Boundary Size (Normal): 0 00:31:48.802 Atomic Boundary Size (PFail): 0 00:31:48.802 Atomic Boundary Offset: 0 00:31:48.802 NGUID/EUI64 Never Reused: No 00:31:48.802 ANA group ID: 1 00:31:48.802 Namespace Write Protected: No 00:31:48.802 Number of LBA Formats: 1 00:31:48.802 Current LBA Format: LBA Format #00 00:31:48.802 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:48.802 00:31:48.802 23:34:37 -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:48.802 23:34:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:48.802 23:34:37 -- nvmf/common.sh@117 -- # sync 00:31:48.802 23:34:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:48.802 23:34:37 -- nvmf/common.sh@120 -- # set +e 00:31:48.802 23:34:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:48.802 23:34:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:48.802 rmmod nvme_tcp 00:31:48.802 rmmod nvme_fabrics 00:31:48.802 23:34:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:48.802 23:34:37 -- nvmf/common.sh@124 -- # set -e 00:31:48.802 23:34:37 -- nvmf/common.sh@125 -- # return 0 00:31:48.802 23:34:37 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:31:48.802 23:34:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:48.802 23:34:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:48.802 23:34:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:48.802 23:34:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:48.802 23:34:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:48.802 23:34:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.802 23:34:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:48.802 23:34:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.346 23:34:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:51.346 23:34:40 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:51.346 23:34:40 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:51.346 23:34:40 -- nvmf/common.sh@675 -- # echo 0 00:31:51.346 23:34:40 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:51.346 23:34:40 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:51.346 23:34:40 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:51.346 23:34:40 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:51.346 23:34:40 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:31:51.346 23:34:40 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:31:51.346 23:34:40 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:54.647 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:54.647 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:54.648 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:54.648 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:54.907 00:31:54.908 real 0m18.876s 00:31:54.908 user 0m5.225s 00:31:54.908 sys 0m10.614s 00:31:54.908 
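The clean_kernel_target teardown traced above has to unwind the kernel nvmet configfs tree in a specific order: the port->subsystem symlink must be removed before either directory can be rmdir'ed, a namespace has to go before its subsystem, and the modules only unload once configfs is empty. A minimal standalone sketch of the same teardown, using the testnqn layout from this run (xtrace hides redirections, so the target of the traced `echo 0` is an assumption; disabling the namespace via its `enable` attribute is the usual pattern):

nvmet=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"  # assumed target of the traced 'echo 0'
rm -f "$nvmet/ports/1/subsystems/$nqn"                 # break the port->subsystem link first
rmdir "$nvmet/subsystems/$nqn/namespaces/1"            # namespace before its subsystem
rmdir "$nvmet/ports/1"
rmdir "$nvmet/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                            # unloads only once configfs is empty

With the kernel target gone, setup.sh then rebinds the test devices (ioatdma/nvme -> vfio-pci), as shown above, so the next SPDK test can claim them.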
23:34:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:54.908 23:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:54.908 ************************************ 00:31:54.908 END TEST nvmf_identify_kernel_target 00:31:54.908 ************************************ 00:31:54.908 23:34:44 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:54.908 23:34:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:54.908 23:34:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:54.908 23:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.169 ************************************ 00:31:55.169 START TEST nvmf_auth 00:31:55.169 ************************************ 00:31:55.169 23:34:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:55.169 * Looking for test storage... 00:31:55.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:55.430 23:34:44 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.430 23:34:44 -- nvmf/common.sh@7 -- # uname -s 00:31:55.430 23:34:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.430 23:34:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.430 23:34:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.430 23:34:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.430 23:34:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.430 23:34:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.430 23:34:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.430 23:34:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.430 23:34:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.430 23:34:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.430 23:34:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:55.430 23:34:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:55.430 23:34:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.430 23:34:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.430 23:34:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.430 23:34:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.430 23:34:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.430 23:34:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.430 23:34:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.430 23:34:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.430 23:34:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.430 23:34:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.430 23:34:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.430 23:34:44 -- paths/export.sh@5 -- # export PATH 00:31:55.430 23:34:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.430 23:34:44 -- nvmf/common.sh@47 -- # : 0 00:31:55.430 23:34:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:55.430 23:34:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:55.430 23:34:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.430 23:34:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.430 23:34:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.430 23:34:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:55.430 23:34:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:55.430 23:34:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:55.430 23:34:44 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:55.430 23:34:44 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:55.430 23:34:44 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:55.430 23:34:44 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:55.430 23:34:44 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:55.430 23:34:44 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:55.430 23:34:44 -- host/auth.sh@21 -- # keys=() 00:31:55.430 23:34:44 -- host/auth.sh@77 -- # nvmftestinit 00:31:55.430 23:34:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:55.430 23:34:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.430 23:34:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:55.430 23:34:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:55.430 23:34:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:55.430 23:34:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.430 23:34:44 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:55.430 23:34:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.430 23:34:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:31:55.430 23:34:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:55.431 23:34:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:55.431 23:34:44 -- common/autotest_common.sh@10 -- # set +x 00:32:03.572 23:34:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:03.572 23:34:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:03.572 23:34:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:03.572 23:34:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:03.572 23:34:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:03.572 23:34:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:03.572 23:34:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:03.572 23:34:51 -- nvmf/common.sh@295 -- # net_devs=() 00:32:03.572 23:34:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:03.572 23:34:51 -- nvmf/common.sh@296 -- # e810=() 00:32:03.572 23:34:51 -- nvmf/common.sh@296 -- # local -ga e810 00:32:03.572 23:34:51 -- nvmf/common.sh@297 -- # x722=() 00:32:03.572 23:34:51 -- nvmf/common.sh@297 -- # local -ga x722 00:32:03.572 23:34:51 -- nvmf/common.sh@298 -- # mlx=() 00:32:03.572 23:34:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:03.572 23:34:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.572 23:34:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:03.572 23:34:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:03.572 23:34:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:03.572 23:34:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:03.572 23:34:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:03.572 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:03.572 23:34:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:03.572 23:34:51 -- nvmf/common.sh@341 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:32:03.572 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:03.572 23:34:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:03.572 23:34:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:03.572 23:34:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.572 23:34:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:03.572 23:34:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.572 23:34:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:03.572 Found net devices under 0000:31:00.0: cvl_0_0 00:32:03.572 23:34:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.572 23:34:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:03.572 23:34:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.572 23:34:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:03.572 23:34:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.572 23:34:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:03.572 Found net devices under 0000:31:00.1: cvl_0_1 00:32:03.572 23:34:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.572 23:34:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:32:03.572 23:34:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:32:03.572 23:34:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:32:03.572 23:34:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.572 23:34:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.572 23:34:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.572 23:34:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:03.572 23:34:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.572 23:34:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.572 23:34:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:03.572 23:34:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.572 23:34:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.572 23:34:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:03.572 23:34:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:03.572 23:34:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.572 23:34:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.572 23:34:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.572 23:34:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.572 23:34:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:03.572 23:34:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.572 23:34:51 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.572 23:34:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.572 23:34:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:03.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:03.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:32:03.572 00:32:03.572 --- 10.0.0.2 ping statistics --- 00:32:03.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.572 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:32:03.572 23:34:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:03.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:32:03.572 00:32:03.572 --- 10.0.0.1 ping statistics --- 00:32:03.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.572 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:32:03.572 23:34:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.572 23:34:51 -- nvmf/common.sh@411 -- # return 0 00:32:03.572 23:34:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:32:03.572 23:34:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.572 23:34:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:03.572 23:34:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:03.572 23:34:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:03.572 23:34:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:03.572 23:34:51 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:32:03.572 23:34:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:03.572 23:34:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:03.572 23:34:51 -- common/autotest_common.sh@10 -- # set +x 00:32:03.572 23:34:51 -- nvmf/common.sh@470 -- # nvmfpid=4154002 00:32:03.572 23:34:51 -- nvmf/common.sh@471 -- # waitforlisten 4154002 00:32:03.572 23:34:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:03.572 23:34:51 -- common/autotest_common.sh@817 -- # '[' -z 4154002 ']' 00:32:03.572 23:34:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.572 23:34:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:03.572 23:34:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
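The addressing traced above is the harness's usual back-to-back TCP layout: one port of the NIC pair (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; the iptables rule admits the NVMe/TCP listener port, and the two pings prove reachability in both directions. Condensed, the wiring amounts to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit the NVMe/TCP port

Every target-side command from here on, including nvmf_tgt itself, is therefore wrapped in `ip netns exec cvl_0_0_ns_spdk`.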
00:32:03.572 23:34:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:03.572 23:34:51 -- common/autotest_common.sh@10 -- # set +x 00:32:03.572 23:34:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:03.572 23:34:51 -- common/autotest_common.sh@850 -- # return 0 00:32:03.572 23:34:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:03.572 23:34:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:03.572 23:34:51 -- common/autotest_common.sh@10 -- # set +x 00:32:03.572 23:34:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:03.572 23:34:51 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:03.572 23:34:51 -- host/auth.sh@81 -- # gen_key null 32 00:32:03.572 23:34:51 -- host/auth.sh@53 -- # local digest len file key 00:32:03.572 23:34:51 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:03.572 23:34:51 -- host/auth.sh@54 -- # local -A digests 00:32:03.572 23:34:51 -- host/auth.sh@56 -- # digest=null 00:32:03.572 23:34:51 -- host/auth.sh@56 -- # len=32 00:32:03.572 23:34:51 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:03.572 23:34:51 -- host/auth.sh@57 -- # key=74235750fe68c90a54270d3ed7825a3b 00:32:03.572 23:34:51 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:32:03.573 23:34:51 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.Nvg 00:32:03.573 23:34:51 -- host/auth.sh@59 -- # format_dhchap_key 74235750fe68c90a54270d3ed7825a3b 0 00:32:03.573 23:34:51 -- nvmf/common.sh@708 -- # format_key DHHC-1 74235750fe68c90a54270d3ed7825a3b 0 00:32:03.573 23:34:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:03.573 23:34:51 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:32:03.573 23:34:51 -- nvmf/common.sh@693 -- # key=74235750fe68c90a54270d3ed7825a3b 00:32:03.573 23:34:51 -- nvmf/common.sh@693 -- # digest=0 00:32:03.573 23:34:51 -- nvmf/common.sh@694 -- # python - 00:32:03.573 23:34:51 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.Nvg 00:32:03.573 23:34:51 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.Nvg 00:32:03.573 23:34:51 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.Nvg 00:32:03.573 23:34:52 -- host/auth.sh@82 -- # gen_key null 48 00:32:03.573 23:34:52 -- host/auth.sh@53 -- # local digest len file key 00:32:03.573 23:34:52 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:03.573 23:34:52 -- host/auth.sh@54 -- # local -A digests 00:32:03.573 23:34:52 -- host/auth.sh@56 -- # digest=null 00:32:03.573 23:34:52 -- host/auth.sh@56 -- # len=48 00:32:03.573 23:34:52 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:03.573 23:34:52 -- host/auth.sh@57 -- # key=020cad447fe68a65ac03245cd758a3d97536befda4b96521 00:32:03.573 23:34:52 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:32:03.573 23:34:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.ghp 00:32:03.573 23:34:52 -- host/auth.sh@59 -- # format_dhchap_key 020cad447fe68a65ac03245cd758a3d97536befda4b96521 0 00:32:03.573 23:34:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 020cad447fe68a65ac03245cd758a3d97536befda4b96521 0 00:32:03.573 23:34:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # key=020cad447fe68a65ac03245cd758a3d97536befda4b96521 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # 
digest=0 00:32:03.573 23:34:52 -- nvmf/common.sh@694 -- # python - 00:32:03.573 23:34:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.ghp 00:32:03.573 23:34:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.ghp 00:32:03.573 23:34:52 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.ghp 00:32:03.573 23:34:52 -- host/auth.sh@83 -- # gen_key sha256 32 00:32:03.573 23:34:52 -- host/auth.sh@53 -- # local digest len file key 00:32:03.573 23:34:52 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:03.573 23:34:52 -- host/auth.sh@54 -- # local -A digests 00:32:03.573 23:34:52 -- host/auth.sh@56 -- # digest=sha256 00:32:03.573 23:34:52 -- host/auth.sh@56 -- # len=32 00:32:03.573 23:34:52 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:03.573 23:34:52 -- host/auth.sh@57 -- # key=5cfa08509025259eb4bcb2eb6feb3a7d 00:32:03.573 23:34:52 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:32:03.573 23:34:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.IqF 00:32:03.573 23:34:52 -- host/auth.sh@59 -- # format_dhchap_key 5cfa08509025259eb4bcb2eb6feb3a7d 1 00:32:03.573 23:34:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 5cfa08509025259eb4bcb2eb6feb3a7d 1 00:32:03.573 23:34:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # key=5cfa08509025259eb4bcb2eb6feb3a7d 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # digest=1 00:32:03.573 23:34:52 -- nvmf/common.sh@694 -- # python - 00:32:03.573 23:34:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.IqF 00:32:03.573 23:34:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.IqF 00:32:03.573 23:34:52 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.IqF 00:32:03.573 23:34:52 -- host/auth.sh@84 -- # gen_key sha384 48 00:32:03.573 23:34:52 -- host/auth.sh@53 -- # local digest len file key 00:32:03.573 23:34:52 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:03.573 23:34:52 -- host/auth.sh@54 -- # local -A digests 00:32:03.573 23:34:52 -- host/auth.sh@56 -- # digest=sha384 00:32:03.573 23:34:52 -- host/auth.sh@56 -- # len=48 00:32:03.573 23:34:52 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:03.573 23:34:52 -- host/auth.sh@57 -- # key=0f7a678690a1cdc87906df9218129adc858c2abc61f403d4 00:32:03.573 23:34:52 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:32:03.573 23:34:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.6ik 00:32:03.573 23:34:52 -- host/auth.sh@59 -- # format_dhchap_key 0f7a678690a1cdc87906df9218129adc858c2abc61f403d4 2 00:32:03.573 23:34:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 0f7a678690a1cdc87906df9218129adc858c2abc61f403d4 2 00:32:03.573 23:34:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # key=0f7a678690a1cdc87906df9218129adc858c2abc61f403d4 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # digest=2 00:32:03.573 23:34:52 -- nvmf/common.sh@694 -- # python - 00:32:03.573 23:34:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.6ik 00:32:03.573 23:34:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.6ik 00:32:03.573 23:34:52 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.6ik 00:32:03.573 23:34:52 -- host/auth.sh@85 -- # gen_key sha512 64 00:32:03.573 23:34:52 -- host/auth.sh@53 -- # local digest len file key 00:32:03.573 23:34:52 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:03.573 23:34:52 -- host/auth.sh@54 -- # local -A digests 00:32:03.573 23:34:52 -- host/auth.sh@56 -- # digest=sha512 00:32:03.573 23:34:52 -- host/auth.sh@56 -- # len=64 00:32:03.573 23:34:52 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:03.573 23:34:52 -- host/auth.sh@57 -- # key=c83d3cc3763d0a84549e701547794a5fb0dd9abe50d31aa5de1478ea9b1d3b3d 00:32:03.573 23:34:52 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:32:03.573 23:34:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.Agh 00:32:03.573 23:34:52 -- host/auth.sh@59 -- # format_dhchap_key c83d3cc3763d0a84549e701547794a5fb0dd9abe50d31aa5de1478ea9b1d3b3d 3 00:32:03.573 23:34:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 c83d3cc3763d0a84549e701547794a5fb0dd9abe50d31aa5de1478ea9b1d3b3d 3 00:32:03.573 23:34:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # key=c83d3cc3763d0a84549e701547794a5fb0dd9abe50d31aa5de1478ea9b1d3b3d 00:32:03.573 23:34:52 -- nvmf/common.sh@693 -- # digest=3 00:32:03.573 23:34:52 -- nvmf/common.sh@694 -- # python - 00:32:03.573 23:34:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.Agh 00:32:03.573 23:34:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.Agh 00:32:03.573 23:34:52 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.Agh 00:32:03.573 23:34:52 -- host/auth.sh@87 -- # waitforlisten 4154002 00:32:03.573 23:34:52 -- common/autotest_common.sh@817 -- # '[' -z 4154002 ']' 00:32:03.573 23:34:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.573 23:34:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:03.573 23:34:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
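Each gen_key call above follows the same recipe: draw N random bytes as a hex string with xxd, then wrap that string in the DHHC-1 representation used for NVMe DH-HMAC-CHAP secrets (that is what the traced inline `python -` does). Judging from the generated keys, the base64 payload is the ASCII hex string followed by a 4-byte CRC-32 trailer, and the middle field names the hash (00 = unhashed, 01 = sha256, 02 = sha384, 03 = sha512). A rough standalone equivalent, assuming the usual little-endian CRC-32 trailer:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as in 'gen_key null 48'
python3 - "$key" 0 <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()          # the hex string itself is the secret payload
digest = int(sys.argv[2])              # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(secret + crc).decode()}:")
PY

Each file is chmod'ed 0600 since it holds a plaintext secret; the five resulting paths land in keys[0..4] and are handed to keyring_file_add_key just below.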
00:32:03.573 23:34:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:03.573 23:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.573 23:34:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:03.573 23:34:52 -- common/autotest_common.sh@850 -- # return 0 00:32:03.573 23:34:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:32:03.573 23:34:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Nvg 00:32:03.573 23:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.573 23:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.573 23:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.573 23:34:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:32:03.573 23:34:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ghp 00:32:03.573 23:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.573 23:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.573 23:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.573 23:34:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:32:03.573 23:34:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.IqF 00:32:03.573 23:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.573 23:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.573 23:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.573 23:34:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:32:03.573 23:34:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6ik 00:32:03.573 23:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.573 23:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.573 23:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.573 23:34:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:32:03.573 23:34:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Agh 00:32:03.573 23:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.573 23:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.573 23:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.573 23:34:52 -- host/auth.sh@92 -- # nvmet_auth_init 00:32:03.573 23:34:52 -- host/auth.sh@35 -- # get_main_ns_ip 00:32:03.573 23:34:52 -- nvmf/common.sh@717 -- # local ip 00:32:03.573 23:34:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:03.573 23:34:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:03.573 23:34:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.573 23:34:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.573 23:34:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:03.573 23:34:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.573 23:34:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:03.573 23:34:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:03.573 23:34:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:03.573 23:34:52 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:03.573 23:34:52 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:03.573 23:34:52 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:32:03.573 23:34:52 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:03.573 23:34:52 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:03.573 23:34:52 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:03.573 23:34:52 -- nvmf/common.sh@628 -- # local block nvme 00:32:03.573 23:34:52 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:32:03.573 23:34:52 -- nvmf/common.sh@631 -- # modprobe nvmet 00:32:03.573 23:34:52 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:03.574 23:34:52 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:06.875 Waiting for block devices as requested 00:32:06.875 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:06.875 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:06.875 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:06.875 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:06.875 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:06.875 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:06.875 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:07.136 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:07.136 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:07.396 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:07.396 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:07.396 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:07.396 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:07.656 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:07.656 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:07.656 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:07.918 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:08.866 23:34:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:32:08.866 23:34:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:08.866 23:34:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:32:08.866 23:34:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:32:08.866 23:34:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:08.866 23:34:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:32:08.866 23:34:57 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:32:08.866 23:34:57 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:08.866 23:34:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:08.866 No valid GPT data, bailing 00:32:08.866 23:34:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:08.866 23:34:57 -- scripts/common.sh@391 -- # pt= 00:32:08.866 23:34:57 -- scripts/common.sh@392 -- # return 1 00:32:08.866 23:34:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:32:08.866 23:34:57 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:32:08.866 23:34:57 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:08.866 23:34:57 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:08.866 23:34:57 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:08.866 23:34:57 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:08.866 23:34:57 -- nvmf/common.sh@656 -- # echo 1 00:32:08.866 23:34:57 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:32:08.866 23:34:57 -- nvmf/common.sh@658 -- # echo 1 00:32:08.866 23:34:57 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:32:08.866 23:34:57 -- nvmf/common.sh@661 -- # echo tcp 00:32:08.866 23:34:57 -- 
nvmf/common.sh@662 -- # echo 4420 00:32:08.866 23:34:57 -- nvmf/common.sh@663 -- # echo ipv4 00:32:08.866 23:34:57 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:08.866 23:34:57 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:32:08.866 00:32:08.866 Discovery Log Number of Records 2, Generation counter 2 00:32:08.866 =====Discovery Log Entry 0====== 00:32:08.866 trtype: tcp 00:32:08.866 adrfam: ipv4 00:32:08.866 subtype: current discovery subsystem 00:32:08.866 treq: not specified, sq flow control disable supported 00:32:08.866 portid: 1 00:32:08.866 trsvcid: 4420 00:32:08.866 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:08.866 traddr: 10.0.0.1 00:32:08.866 eflags: none 00:32:08.866 sectype: none 00:32:08.866 =====Discovery Log Entry 1====== 00:32:08.866 trtype: tcp 00:32:08.866 adrfam: ipv4 00:32:08.866 subtype: nvme subsystem 00:32:08.866 treq: not specified, sq flow control disable supported 00:32:08.866 portid: 1 00:32:08.866 trsvcid: 4420 00:32:08.866 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:08.866 traddr: 10.0.0.1 00:32:08.866 eflags: none 00:32:08.866 sectype: none 00:32:08.866 23:34:57 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:08.866 23:34:57 -- host/auth.sh@37 -- # echo 0 00:32:08.866 23:34:57 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:08.866 23:34:57 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:08.866 23:34:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:08.866 23:34:57 -- host/auth.sh@44 -- # digest=sha256 00:32:08.866 23:34:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.866 23:34:57 -- host/auth.sh@44 -- # keyid=1 00:32:08.866 23:34:57 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:08.866 23:34:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:08.866 23:34:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:08.866 23:34:57 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:08.866 23:34:57 -- host/auth.sh@100 -- # IFS=, 00:32:08.866 23:34:57 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:32:08.866 23:34:57 -- host/auth.sh@100 -- # IFS=, 00:32:08.866 23:34:57 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:08.866 23:34:57 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:08.866 23:34:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:08.866 23:34:57 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:32:08.866 23:34:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:08.866 23:34:57 -- host/auth.sh@68 -- # keyid=1 00:32:08.866 23:34:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:08.866 23:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.866 23:34:57 -- common/autotest_common.sh@10 -- # set +x 00:32:08.866 23:34:57 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.866 23:34:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:08.866 23:34:57 -- nvmf/common.sh@717 -- # local ip 00:32:08.866 23:34:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:08.866 23:34:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:08.866 23:34:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.866 23:34:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.866 23:34:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:08.866 23:34:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.866 23:34:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:08.866 23:34:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:08.866 23:34:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:08.866 23:34:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:08.866 23:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.866 23:34:57 -- common/autotest_common.sh@10 -- # set +x 00:32:09.128 nvme0n1 00:32:09.128 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.128 23:34:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.128 23:34:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:09.128 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.128 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.128 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.128 23:34:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.128 23:34:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.128 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.128 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.128 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.128 23:34:58 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:32:09.128 23:34:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:09.128 23:34:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:09.128 23:34:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:09.128 23:34:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:09.128 23:34:58 -- host/auth.sh@44 -- # digest=sha256 00:32:09.128 23:34:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:09.128 23:34:58 -- host/auth.sh@44 -- # keyid=0 00:32:09.128 23:34:58 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:09.128 23:34:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:09.128 23:34:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:09.128 23:34:58 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:09.128 23:34:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:32:09.128 23:34:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:09.128 23:34:58 -- host/auth.sh@68 -- # digest=sha256 00:32:09.128 23:34:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:09.128 23:34:58 -- host/auth.sh@68 -- # keyid=0 00:32:09.128 23:34:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:09.128 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.128 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.128 23:34:58 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.128 23:34:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:09.128 23:34:58 -- nvmf/common.sh@717 -- # local ip 00:32:09.128 23:34:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:09.128 23:34:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:09.128 23:34:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.128 23:34:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.128 23:34:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:09.128 23:34:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.128 23:34:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:09.128 23:34:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:09.128 23:34:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:09.128 23:34:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:09.128 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.128 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.128 nvme0n1 00:32:09.128 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.128 23:34:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.128 23:34:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:09.128 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.128 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.389 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.389 23:34:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.389 23:34:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.389 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.389 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.389 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.389 23:34:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:09.389 23:34:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:09.389 23:34:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:09.389 23:34:58 -- host/auth.sh@44 -- # digest=sha256 00:32:09.389 23:34:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:09.389 23:34:58 -- host/auth.sh@44 -- # keyid=1 00:32:09.389 23:34:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:09.389 23:34:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:09.389 23:34:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:09.389 23:34:58 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:09.389 23:34:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:32:09.389 23:34:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:09.389 23:34:58 -- host/auth.sh@68 -- # digest=sha256 00:32:09.389 23:34:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:09.389 23:34:58 -- host/auth.sh@68 -- # keyid=1 00:32:09.389 23:34:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:09.389 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.389 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.389 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.389 23:34:58 -- host/auth.sh@70 -- # get_main_ns_ip 
00:32:09.389 23:34:58 -- nvmf/common.sh@717 -- # local ip 00:32:09.389 23:34:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:09.389 23:34:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:09.389 23:34:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.389 23:34:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.389 23:34:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:09.389 23:34:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.389 23:34:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:09.389 23:34:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:09.389 23:34:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:09.389 23:34:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:09.389 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.389 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.389 nvme0n1 00:32:09.389 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.389 23:34:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.389 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.389 23:34:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:09.389 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.389 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.650 23:34:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.651 23:34:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.651 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.651 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.651 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.651 23:34:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:09.651 23:34:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:09.651 23:34:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:09.651 23:34:58 -- host/auth.sh@44 -- # digest=sha256 00:32:09.651 23:34:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:09.651 23:34:58 -- host/auth.sh@44 -- # keyid=2 00:32:09.651 23:34:58 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:09.651 23:34:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:09.651 23:34:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:09.651 23:34:58 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:09.651 23:34:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:32:09.651 23:34:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:09.651 23:34:58 -- host/auth.sh@68 -- # digest=sha256 00:32:09.651 23:34:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:09.651 23:34:58 -- host/auth.sh@68 -- # keyid=2 00:32:09.651 23:34:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:09.651 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.651 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.651 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.651 23:34:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:09.651 23:34:58 -- nvmf/common.sh@717 -- # local ip 00:32:09.651 23:34:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:09.651 23:34:58 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:32:09.651 23:34:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.651 23:34:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.651 23:34:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:09.651 23:34:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.651 23:34:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:09.651 23:34:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:09.651 23:34:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:09.651 23:34:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:09.651 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.651 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.651 nvme0n1 00:32:09.651 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.651 23:34:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.651 23:34:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:09.651 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.651 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.651 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.651 23:34:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.651 23:34:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.651 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.651 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.651 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.651 23:34:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:09.651 23:34:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:09.651 23:34:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:09.651 23:34:58 -- host/auth.sh@44 -- # digest=sha256 00:32:09.651 23:34:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:09.651 23:34:58 -- host/auth.sh@44 -- # keyid=3 00:32:09.651 23:34:58 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:09.651 23:34:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:09.651 23:34:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:09.651 23:34:58 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:09.651 23:34:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:32:09.651 23:34:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:09.651 23:34:58 -- host/auth.sh@68 -- # digest=sha256 00:32:09.651 23:34:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:09.651 23:34:58 -- host/auth.sh@68 -- # keyid=3 00:32:09.651 23:34:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:09.651 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:09.651 23:34:58 -- common/autotest_common.sh@10 -- # set +x 00:32:09.651 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:09.651 23:34:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:09.651 23:34:58 -- nvmf/common.sh@717 -- # local ip 00:32:09.913 23:34:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:09.913 23:34:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:09.913 23:34:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:32:09.651 23:34:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3
00:32:09.651 23:34:58 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:09.651 23:34:58 -- host/auth.sh@68 -- # digest=sha256
00:32:09.651 23:34:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:32:09.651 23:34:58 -- host/auth.sh@68 -- # keyid=3
00:32:09.651 23:34:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:09.651 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:09.651 23:34:58 -- common/autotest_common.sh@10 -- # set +x
00:32:09.651 23:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:09.651 23:34:58 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:09.651 23:34:58 -- nvmf/common.sh@717 -- # local ip
00:32:09.651 23:34:58 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:09.651 23:34:58 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:09.913 23:34:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:09.913 23:34:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:09.913 23:34:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:09.913 23:34:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:09.913 23:34:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:09.913 23:34:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:09.913 23:34:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:09.913 23:34:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:32:09.913 23:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:09.913 23:34:58 -- common/autotest_common.sh@10 -- # set +x
00:32:09.913 nvme0n1
00:32:09.913 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:09.913 23:34:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:09.913 23:34:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:09.913 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:09.913 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:09.913 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:09.913 23:34:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:09.913 23:34:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:09.913 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:09.913 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:09.913 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:09.913 23:34:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:09.913 23:34:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:32:09.913 23:34:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:09.913 23:34:59 -- host/auth.sh@44 -- # digest=sha256
00:32:09.913 23:34:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:09.913 23:34:59 -- host/auth.sh@44 -- # keyid=4
00:32:09.913 23:34:59 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=:
00:32:09.913 23:34:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:09.913 23:34:59 -- host/auth.sh@48 -- # echo ffdhe2048
00:32:09.913 23:34:59 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=:
00:32:09.913 23:34:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4
00:32:09.913 23:34:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:09.913 23:34:59 -- host/auth.sh@68 -- # digest=sha256
00:32:09.913 23:34:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:32:09.913 23:34:59 -- host/auth.sh@68 -- # keyid=4
00:32:09.913 23:34:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:09.913 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:09.913 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:09.913 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:09.913 23:34:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:09.913 23:34:59 -- nvmf/common.sh@717 -- # local ip
00:32:09.913 23:34:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:09.913 23:34:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:09.913 23:34:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:09.913 23:34:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:09.913 23:34:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:09.913 23:34:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:09.913 23:34:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:09.913 23:34:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:09.913 23:34:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:09.913 23:34:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:09.913 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:09.913 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.174 nvme0n1
00:32:10.174 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.174 23:34:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:10.174 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.174 23:34:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:10.174 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.174 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.174 23:34:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:10.174 23:34:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:10.174 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.174 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.174 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.174 23:34:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:32:10.174 23:34:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
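The records at host/auth.sh@108-@111 expose the shape of the test loop: for each DH group the script walks all five keys, programs the target side, then runs one connect/verify/detach cycle. The dhgroups walked here — ffdhe2048 through ffdhe8192 — are the RFC 7919 finite-field groups in increasing modulus size. Reconstructed from the xtrace (the argument plumbing is paraphrased, so treat this as a sketch):

    # Shape of the host/auth.sh loop seen at @108-@111; dhgroups and keys
    # are populated earlier in the script (not part of this excerpt).
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "sha256" "$dhgroup" "$keyid"    # target side
            connect_authenticate "sha256" "$dhgroup" "$keyid"  # initiator side
        done
    done
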
00:32:10.174 23:34:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:32:10.174 23:34:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:10.174 23:34:59 -- host/auth.sh@44 -- # digest=sha256
00:32:10.174 23:34:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:10.174 23:34:59 -- host/auth.sh@44 -- # keyid=0
00:32:10.174 23:34:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv:
00:32:10.174 23:34:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:10.174 23:34:59 -- host/auth.sh@48 -- # echo ffdhe3072
00:32:10.174 23:34:59 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv:
00:32:10.174 23:34:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0
00:32:10.174 23:34:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:10.174 23:34:59 -- host/auth.sh@68 -- # digest=sha256
00:32:10.174 23:34:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:32:10.174 23:34:59 -- host/auth.sh@68 -- # keyid=0
00:32:10.174 23:34:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:10.174 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.174 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.174 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.174 23:34:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:10.174 23:34:59 -- nvmf/common.sh@717 -- # local ip
00:32:10.174 23:34:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:10.174 23:34:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:10.174 23:34:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:10.174 23:34:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:10.174 23:34:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:10.174 23:34:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:10.174 23:34:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:10.174 23:34:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:10.174 23:34:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:10.174 23:34:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:32:10.174 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.174 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.435 nvme0n1
00:32:10.435 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.435 23:34:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:10.435 23:34:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:10.435 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.435 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.435 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.435 23:34:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:10.435 23:34:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:10.435 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.435 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.435 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.435 23:34:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:10.435 23:34:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:32:10.435 23:34:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:10.435 23:34:59 -- host/auth.sh@44 -- # digest=sha256
00:32:10.435 23:34:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:10.435 23:34:59 -- host/auth.sh@44 -- # keyid=1
00:32:10.435 23:34:59 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==:
00:32:10.435 23:34:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:10.435 23:34:59 -- host/auth.sh@48 -- # echo ffdhe3072
00:32:10.435 23:34:59 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==:
00:32:10.435 23:34:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1
00:32:10.436 23:34:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:10.436 23:34:59 -- host/auth.sh@68 -- # digest=sha256
00:32:10.436 23:34:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:32:10.436 23:34:59 -- host/auth.sh@68 -- # keyid=1
00:32:10.436 23:34:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:10.436 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.436 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.436 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.436 23:34:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:10.436 23:34:59 -- nvmf/common.sh@717 -- # local ip
00:32:10.436 23:34:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:10.436 23:34:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:10.436 23:34:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:10.436 23:34:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:10.436 23:34:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:10.436 23:34:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:10.436 23:34:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:10.436 23:34:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:10.436 23:34:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:10.436 23:34:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:32:10.436 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.436 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.730 nvme0n1
00:32:10.730 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.730 23:34:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:10.730 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.730 23:34:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:10.730 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.730 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.730 23:34:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:10.730 23:34:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:10.730 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.730 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.730 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.730 23:34:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:10.730 23:34:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:32:10.730 23:34:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:10.730 23:34:59 -- host/auth.sh@44 -- # digest=sha256
00:32:10.730 23:34:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:10.730 23:34:59 -- host/auth.sh@44 -- # keyid=2
00:32:10.730 23:34:59 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV:
00:32:10.730 23:34:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:10.730 23:34:59 -- host/auth.sh@48 -- # echo ffdhe3072
00:32:10.730 23:34:59 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV:
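nvmet_auth_set_key's body shows only bare echo commands at host/auth.sh@47-@49 because xtrace does not print redirections. On the kernel nvmet side, per-host DH-HMAC-CHAP parameters live in configfs, so a plausible reading of those three echoes is the following; the configfs paths are an assumption on my part, since the redirection targets are not visible in this excerpt:

    # Assumed targets of the echoes at host/auth.sh@47-@49 (paths not shown
    # in the trace; nvmet exposes these attributes under configfs).
    hostnqn=nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash"
    echo ffdhe3072      > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup"
    echo 'DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV:' \
                        > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key"
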
00:32:10.730 23:34:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2
00:32:10.730 23:34:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:10.730 23:34:59 -- host/auth.sh@68 -- # digest=sha256
00:32:10.730 23:34:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:32:10.730 23:34:59 -- host/auth.sh@68 -- # keyid=2
00:32:10.730 23:34:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:10.730 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.730 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:10.730 23:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:10.730 23:34:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:10.730 23:34:59 -- nvmf/common.sh@717 -- # local ip
00:32:10.730 23:34:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:10.730 23:34:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:10.730 23:34:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:10.730 23:34:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:10.730 23:34:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:10.730 23:34:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:10.730 23:34:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:10.730 23:34:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:10.730 23:34:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:10.730 23:34:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:32:10.730 23:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:10.730 23:34:59 -- common/autotest_common.sh@10 -- # set +x
00:32:11.033 nvme0n1
00:32:11.033 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.033 23:35:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:11.033 23:35:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:11.033 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.033 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.033 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.033 23:35:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:11.033 23:35:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:11.033 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.033 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.033 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.033 23:35:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:11.033 23:35:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:32:11.033 23:35:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:11.033 23:35:00 -- host/auth.sh@44 -- # digest=sha256
00:32:11.033 23:35:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:11.033 23:35:00 -- host/auth.sh@44 -- # keyid=3
00:32:11.033 23:35:00 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==:
00:32:11.033 23:35:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:11.033 23:35:00 -- host/auth.sh@48 -- # echo ffdhe3072
00:32:11.033 23:35:00 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==:
00:32:11.033 23:35:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3
00:32:11.033 23:35:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:11.033 23:35:00 -- host/auth.sh@68 -- # digest=sha256
00:32:11.033 23:35:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:32:11.033 23:35:00 -- host/auth.sh@68 -- # keyid=3
00:32:11.033 23:35:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:11.033 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.033 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.033 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.033 23:35:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:11.033 23:35:00 -- nvmf/common.sh@717 -- # local ip
00:32:11.033 23:35:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:11.033 23:35:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:11.033 23:35:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:11.033 23:35:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:11.033 23:35:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:11.033 23:35:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:11.033 23:35:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:11.033 23:35:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:11.033 23:35:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:11.033 23:35:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:32:11.033 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.033 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.327 nvme0n1
00:32:11.327 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.327 23:35:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:11.327 23:35:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:11.327 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.327 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.327 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.327 23:35:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:11.327 23:35:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:11.327 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.327 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.327 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.327 23:35:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:11.327 23:35:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:32:11.327 23:35:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:11.327 23:35:00 -- host/auth.sh@44 -- # digest=sha256
00:32:11.327 23:35:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:11.327 23:35:00 -- host/auth.sh@44 -- # keyid=4
00:32:11.327 23:35:00 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=:
00:32:11.327 23:35:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:11.327 23:35:00 -- host/auth.sh@48 -- # echo ffdhe3072
00:32:11.327 23:35:00 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=:
00:32:11.327 23:35:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4
00:32:11.327 23:35:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:11.327 23:35:00 -- host/auth.sh@68 -- # digest=sha256
00:32:11.327 23:35:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:32:11.327 23:35:00 -- host/auth.sh@68 -- # keyid=4
00:32:11.327 23:35:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:11.327 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.327 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.327 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.327 23:35:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:11.327 23:35:00 -- nvmf/common.sh@717 -- # local ip
00:32:11.327 23:35:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:11.327 23:35:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:11.327 23:35:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:11.327 23:35:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:11.327 23:35:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:11.327 23:35:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:11.327 23:35:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:11.327 23:35:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:11.327 23:35:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:11.327 23:35:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:11.327 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.327 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.587 nvme0n1
00:32:11.587 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.587 23:35:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:11.587 23:35:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:11.587 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.587 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.587 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.587 23:35:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:11.587 23:35:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:11.587 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.587 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.587 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.587 23:35:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:32:11.587 23:35:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:11.587 23:35:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:32:11.587 23:35:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:11.587 23:35:00 -- host/auth.sh@44 -- # digest=sha256
00:32:11.587 23:35:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:11.587 23:35:00 -- host/auth.sh@44 -- # keyid=0
00:32:11.587 23:35:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv:
00:32:11.587 23:35:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:11.587 23:35:00 -- host/auth.sh@48 -- # echo ffdhe4096
00:32:11.587 23:35:00 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv:
00:32:11.587 23:35:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0
00:32:11.587 23:35:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:11.587 23:35:00 -- host/auth.sh@68 -- # digest=sha256
00:32:11.587 23:35:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:32:11.587 23:35:00 -- host/auth.sh@68 -- # keyid=0
00:32:11.587 23:35:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:32:11.587 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.587 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.587 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.587 23:35:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:11.587 23:35:00 -- nvmf/common.sh@717 -- # local ip
00:32:11.587 23:35:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:11.587 23:35:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:11.587 23:35:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:11.587 23:35:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:11.587 23:35:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:11.587 23:35:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:11.587 23:35:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:11.587 23:35:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:11.587 23:35:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:11.587 23:35:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
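Each connect_authenticate iteration is four RPCs against the SPDK target: restrict the allowed digests and DH groups, attach with one of the pre-loaded keys, confirm the controller came up, and detach. rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, so one cycle can be reproduced by hand roughly as below; key0 names a key registered earlier in the script, which is not part of this excerpt:

    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
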
00:32:11.587 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.587 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.848 nvme0n1
00:32:11.848 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.848 23:35:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:11.848 23:35:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:11.848 23:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.848 23:35:00 -- common/autotest_common.sh@10 -- # set +x
00:32:11.848 23:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.848 23:35:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:11.848 23:35:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:11.848 23:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.848 23:35:01 -- common/autotest_common.sh@10 -- # set +x
00:32:11.848 23:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.848 23:35:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:11.848 23:35:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:32:11.848 23:35:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:11.848 23:35:01 -- host/auth.sh@44 -- # digest=sha256
00:32:11.848 23:35:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:11.848 23:35:01 -- host/auth.sh@44 -- # keyid=1
00:32:11.848 23:35:01 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==:
00:32:11.848 23:35:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:11.848 23:35:01 -- host/auth.sh@48 -- # echo ffdhe4096
00:32:11.848 23:35:01 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==:
00:32:11.848 23:35:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1
00:32:11.849 23:35:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:11.849 23:35:01 -- host/auth.sh@68 -- # digest=sha256
00:32:11.849 23:35:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:32:11.849 23:35:01 -- host/auth.sh@68 -- # keyid=1
00:32:11.849 23:35:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:32:11.849 23:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.849 23:35:01 -- common/autotest_common.sh@10 -- # set +x
00:32:11.849 23:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:11.849 23:35:01 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:11.849 23:35:01 -- nvmf/common.sh@717 -- # local ip
00:32:11.849 23:35:01 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:11.849 23:35:01 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:11.849 23:35:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:11.849 23:35:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:11.849 23:35:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:11.849 23:35:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:11.849 23:35:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:11.849 23:35:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:11.849 23:35:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:11.849 23:35:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:32:11.849 23:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:11.849 23:35:01 -- common/autotest_common.sh@10 -- # set +x
00:32:12.109 nvme0n1
23:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:12.109 23:35:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:12.109 23:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:12.109 23:35:01 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:12.109 23:35:01 -- common/autotest_common.sh@10 -- # set +x
00:32:12.369 23:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:12.369 23:35:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:12.369 23:35:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:12.369 23:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:12.369 23:35:01 -- common/autotest_common.sh@10 -- # set +x
00:32:12.369 23:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:12.369 23:35:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:12.369 23:35:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:32:12.369 23:35:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:12.369 23:35:01 -- host/auth.sh@44 -- # digest=sha256
00:32:12.369 23:35:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:12.369 23:35:01 -- host/auth.sh@44 -- # keyid=2
00:32:12.369 23:35:01 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV:
00:32:12.369 23:35:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:12.369 23:35:01 -- host/auth.sh@48 -- # echo ffdhe4096
00:32:12.369 23:35:01 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV:
00:32:12.369 23:35:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2
00:32:12.369 23:35:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:12.369 23:35:01 -- host/auth.sh@68 -- # digest=sha256
00:32:12.369 23:35:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:32:12.369 23:35:01 -- host/auth.sh@68 -- # keyid=2
00:32:12.369 23:35:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:32:12.369 23:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:12.369 23:35:01 -- common/autotest_common.sh@10 -- # set +x
00:32:12.369 23:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:12.369 23:35:01 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:12.369 23:35:01 -- nvmf/common.sh@717 -- # local ip
00:32:12.369 23:35:01 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:12.369 23:35:01 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:12.369 23:35:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:12.369 23:35:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:12.369 23:35:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:12.369 23:35:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:12.369 23:35:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:12.369 23:35:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:12.369 23:35:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:12.369 23:35:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:32:12.369 23:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:12.369 23:35:01 -- common/autotest_common.sh@10 -- # set +x
00:32:12.630 nvme0n1
00:32:12.630 23:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:12.630 23:35:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:12.630 23:35:01 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:12.630 23:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:12.630 23:35:01 -- common/autotest_common.sh@10 -- # set +x
00:32:12.630 23:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:12.630 23:35:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:12.630 23:35:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:12.630 23:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:12.630 23:35:01 -- common/autotest_common.sh@10 -- # set +x
00:32:12.630 23:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:12.630 23:35:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:12.630 23:35:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:32:12.630 23:35:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:12.630 23:35:01 -- host/auth.sh@44 -- # digest=sha256
00:32:12.630 23:35:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:12.630 23:35:01 -- host/auth.sh@44 -- # keyid=3
00:32:12.630 23:35:01 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==:
00:32:12.630 23:35:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:12.630 23:35:01 -- host/auth.sh@48 -- # echo ffdhe4096
00:32:12.630 23:35:01 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==:
00:32:12.630 23:35:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3
00:32:12.630 23:35:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:12.630 23:35:01 -- host/auth.sh@68 -- # digest=sha256
00:32:12.630 23:35:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:32:12.630 23:35:01 -- host/auth.sh@68 -- # keyid=3
00:32:12.630 23:35:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:32:12.630 23:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:12.891 23:35:02 -- common/autotest_common.sh@10 -- # set +x
00:32:12.891 23:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:12.891 23:35:02 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:12.891 23:35:02 -- nvmf/common.sh@717 -- # local ip
00:32:12.891 23:35:02 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:12.891 23:35:02 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:12.891 23:35:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:12.891 23:35:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:12.891 23:35:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:12.891 23:35:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:12.891 23:35:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:12.891 23:35:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:12.891 23:35:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1
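The repeated nvmf/common.sh@717-@731 block is the helper that resolves which address the initiator should dial: it keys a small map by transport, then dereferences the environment variable the map names (here NVMF_INITIATOR_IP, holding 10.0.0.1). Reconstructed from the xtrace; the control flow and the TEST_TRANSPORT variable name are paraphrased, so treat this as a sketch:

    # Sketch of the ip-resolution helper traced at nvmf/common.sh@717-@731.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1   # trace evaluates: [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }
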
00:32:12.891 23:35:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:32:12.891 23:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:12.891 23:35:02 -- common/autotest_common.sh@10 -- # set +x
00:32:13.152 nvme0n1
00:32:13.152 23:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:13.152 23:35:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:13.152 23:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:13.152 23:35:02 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:13.152 23:35:02 -- common/autotest_common.sh@10 -- # set +x
00:32:13.152 23:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:13.413 23:35:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:13.413 23:35:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:13.413 23:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:13.413 23:35:02 -- common/autotest_common.sh@10 -- # set +x
00:32:13.413 23:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:13.413 23:35:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:32:13.413 23:35:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:13.413 23:35:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:32:13.413 23:35:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:13.413 23:35:02 -- host/auth.sh@44 -- # digest=sha256
00:32:13.413 23:35:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:32:13.413 23:35:02 -- host/auth.sh@44 -- # keyid=0
00:32:13.413 23:35:02 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv:
00:32:13.413 23:35:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:13.413 23:35:02 -- host/auth.sh@48 -- # echo ffdhe6144
00:32:13.413 23:35:02 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv:
00:32:13.413 23:35:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0
00:32:13.413 23:35:02 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:13.413 23:35:02 -- host/auth.sh@68 -- # digest=sha256
00:32:13.413 23:35:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:32:13.413 23:35:02 -- host/auth.sh@68 -- # keyid=0
00:32:13.413 23:35:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:32:13.413 23:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:13.413 23:35:02 -- common/autotest_common.sh@10 -- # set +x
00:32:13.413 23:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:13.413 23:35:02 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:13.413 23:35:02 -- nvmf/common.sh@717 -- # local ip
00:32:13.413 23:35:02 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:13.413 23:35:02 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:13.413 23:35:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:13.413 23:35:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:13.413 23:35:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:13.413 23:35:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:13.413 23:35:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:13.413 23:35:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:13.413 23:35:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:13.413 23:35:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:32:13.413 23:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:13.413 23:35:02 -- common/autotest_common.sh@10 -- # set +x
00:32:13.674 nvme0n1
00:32:13.674 23:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:13.934 23:35:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:13.934 23:35:02 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:13.934 23:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:13.934 23:35:02 -- common/autotest_common.sh@10 -- # set +x
00:32:13.934 23:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:13.934 23:35:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:13.934 23:35:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:13.934 23:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:13.934 23:35:02 -- common/autotest_common.sh@10 -- # set +x
00:32:13.934 23:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:13.934 23:35:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:13.934 23:35:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:32:13.934 23:35:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:13.934 23:35:02 -- host/auth.sh@44 -- # digest=sha256
00:32:13.934 23:35:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:32:13.934 23:35:02 -- host/auth.sh@44 -- # keyid=1
00:32:13.934 23:35:02 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==:
00:32:13.934 23:35:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:13.934 23:35:02 -- host/auth.sh@48 -- # echo ffdhe6144
00:32:13.934 23:35:02 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==:
00:32:13.934 23:35:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1
00:32:13.934 23:35:02 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:13.934 23:35:02 -- host/auth.sh@68 -- # digest=sha256
00:32:13.934 23:35:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:32:13.934 23:35:02 -- host/auth.sh@68 -- # keyid=1
00:32:13.934 23:35:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:32:13.934 23:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:13.934 23:35:02 -- common/autotest_common.sh@10 -- # set +x
00:32:13.934 23:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:13.934 23:35:03 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:13.934 23:35:03 -- nvmf/common.sh@717 -- # local ip
00:32:13.934 23:35:03 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:13.934 23:35:03 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:13.934 23:35:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:13.934 23:35:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:13.934 23:35:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:13.934 23:35:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:13.934 23:35:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:13.934 23:35:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:13.934 23:35:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:13.934 23:35:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:32:13.934 23:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:13.934 23:35:03 -- common/autotest_common.sh@10 -- # set +x
00:32:14.506 nvme0n1
00:32:14.506 23:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:14.506 23:35:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:14.506 23:35:03 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:14.506 23:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:14.506 23:35:03 -- common/autotest_common.sh@10 -- # set +x
00:32:14.506 23:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:14.506 23:35:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:14.506 23:35:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:14.506 23:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:14.506 23:35:03 -- common/autotest_common.sh@10 -- # set +x
00:32:14.506 23:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:14.506 23:35:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:32:14.506 23:35:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:32:14.506 23:35:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:32:14.506 23:35:03 -- host/auth.sh@44 -- # digest=sha256
00:32:14.506 23:35:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:32:14.506 23:35:03 -- host/auth.sh@44 -- # keyid=2
00:32:14.506 23:35:03 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV:
00:32:14.506 23:35:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:32:14.506 23:35:03 -- host/auth.sh@48 -- # echo ffdhe6144
00:32:14.506 23:35:03 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV:
00:32:14.506 23:35:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2
00:32:14.506 23:35:03 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:32:14.506 23:35:03 -- host/auth.sh@68 -- # digest=sha256
00:32:14.506 23:35:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:32:14.506 23:35:03 -- host/auth.sh@68 -- # keyid=2
00:32:14.506 23:35:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:32:14.506 23:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:14.506 23:35:03 -- common/autotest_common.sh@10 -- # set +x
00:32:14.767 23:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:14.767 23:35:04 -- host/auth.sh@70 -- # get_main_ns_ip
00:32:14.767 23:35:04 -- nvmf/common.sh@717 -- # local ip
00:32:14.767 23:35:04 -- nvmf/common.sh@718 -- # ip_candidates=()
00:32:14.767 23:35:04 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:32:14.767 23:35:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:14.767 23:35:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:14.767 23:35:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:32:14.767 23:35:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:14.767 23:35:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:32:14.767 23:35:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:32:14.767 23:35:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:32:14.767 23:35:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:32:14.767 23:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:14.767 23:35:04 -- common/autotest_common.sh@10 -- # set +x
00:32:15.028 nvme0n1
00:32:15.028 23:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:15.028 23:35:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:32:15.028 23:35:04 -- host/auth.sh@73 -- # jq -r '.[].name'
00:32:15.028 23:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:15.028 23:35:04 -- common/autotest_common.sh@10 -- # set +x
00:32:15.028 23:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:15.028 23:35:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:15.028 23:35:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:15.028 23:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:15.028 23:35:04 -- common/autotest_common.sh@10 -- # set +x
00:32:15.028 23:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
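The odd-looking [[ nvme0 == \n\v\m\e\0 ]] tests in these verify steps are an xtrace artifact, not escape sequences: when the right-hand side of a pattern match is quoted, bash's trace output backslash-escapes every character to mark it as a literal match. Stripped of that quoting noise, the check at host/auth.sh@73-@74 is simply a string comparison of the attached controller name followed by teardown:

    # What the verify/detach step does, minus the xtrace re-quoting.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                     # traced as: [[ nvme0 == \n\v\m\e\0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0  # tear down before the next key
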
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:15.599 23:35:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:15.599 23:35:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:15.599 23:35:04 -- host/auth.sh@44 -- # digest=sha256 00:32:15.599 23:35:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:15.599 23:35:04 -- host/auth.sh@44 -- # keyid=4 00:32:15.599 23:35:04 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:15.599 23:35:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:15.599 23:35:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:15.599 23:35:04 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:15.599 23:35:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:32:15.599 23:35:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:15.599 23:35:04 -- host/auth.sh@68 -- # digest=sha256 00:32:15.599 23:35:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:15.599 23:35:04 -- host/auth.sh@68 -- # keyid=4 00:32:15.599 23:35:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:15.599 23:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:15.599 23:35:04 -- common/autotest_common.sh@10 -- # set +x 00:32:15.599 23:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:15.599 23:35:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:15.599 23:35:04 -- nvmf/common.sh@717 -- # local ip 00:32:15.599 23:35:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:15.599 23:35:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:15.599 23:35:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.599 23:35:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.599 23:35:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:15.599 23:35:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.599 23:35:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:15.599 23:35:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:15.599 23:35:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:15.599 23:35:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:15.599 23:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:15.599 23:35:04 -- common/autotest_common.sh@10 -- # set +x 00:32:15.860 nvme0n1 00:32:15.860 23:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:15.860 23:35:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.860 23:35:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:15.860 23:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:15.860 23:35:05 -- common/autotest_common.sh@10 -- # set +x 00:32:15.860 23:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.119 23:35:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.119 23:35:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.120 23:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.120 23:35:05 -- common/autotest_common.sh@10 -- # set +x 00:32:16.120 23:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.120 23:35:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:16.120 23:35:05 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:16.120 23:35:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:16.120 23:35:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:16.120 23:35:05 -- host/auth.sh@44 -- # digest=sha256 00:32:16.120 23:35:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:16.120 23:35:05 -- host/auth.sh@44 -- # keyid=0 00:32:16.120 23:35:05 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:16.120 23:35:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:16.120 23:35:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:16.120 23:35:05 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:16.120 23:35:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:32:16.120 23:35:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:16.120 23:35:05 -- host/auth.sh@68 -- # digest=sha256 00:32:16.120 23:35:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:16.120 23:35:05 -- host/auth.sh@68 -- # keyid=0 00:32:16.120 23:35:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:16.120 23:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.120 23:35:05 -- common/autotest_common.sh@10 -- # set +x 00:32:16.120 23:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.120 23:35:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:16.120 23:35:05 -- nvmf/common.sh@717 -- # local ip 00:32:16.120 23:35:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:16.120 23:35:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:16.120 23:35:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.120 23:35:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.120 23:35:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:16.120 23:35:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.120 23:35:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:16.120 23:35:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:16.120 23:35:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:16.120 23:35:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:16.120 23:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.120 23:35:05 -- common/autotest_common.sh@10 -- # set +x 00:32:16.689 nvme0n1 00:32:16.689 23:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.689 23:35:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.689 23:35:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:16.689 23:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.689 23:35:05 -- common/autotest_common.sh@10 -- # set +x 00:32:16.689 23:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.949 23:35:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.949 23:35:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.949 23:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.949 23:35:05 -- common/autotest_common.sh@10 -- # set +x 00:32:16.950 23:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.950 23:35:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:16.950 23:35:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:16.950 23:35:05 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:16.950 23:35:05 -- host/auth.sh@44 -- # digest=sha256 00:32:16.950 23:35:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:16.950 23:35:05 -- host/auth.sh@44 -- # keyid=1 00:32:16.950 23:35:05 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:16.950 23:35:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:16.950 23:35:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:16.950 23:35:05 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:16.950 23:35:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:32:16.950 23:35:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:16.950 23:35:05 -- host/auth.sh@68 -- # digest=sha256 00:32:16.950 23:35:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:16.950 23:35:05 -- host/auth.sh@68 -- # keyid=1 00:32:16.950 23:35:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:16.950 23:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.950 23:35:05 -- common/autotest_common.sh@10 -- # set +x 00:32:16.950 23:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.950 23:35:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:16.950 23:35:05 -- nvmf/common.sh@717 -- # local ip 00:32:16.950 23:35:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:16.950 23:35:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:16.950 23:35:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.950 23:35:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.950 23:35:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:16.950 23:35:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.950 23:35:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:16.950 23:35:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:16.950 23:35:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:16.950 23:35:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:16.950 23:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.950 23:35:05 -- common/autotest_common.sh@10 -- # set +x 00:32:17.520 nvme0n1 00:32:17.520 23:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.520 23:35:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.520 23:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.520 23:35:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:17.520 23:35:06 -- common/autotest_common.sh@10 -- # set +x 00:32:17.520 23:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.780 23:35:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.780 23:35:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.780 23:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.780 23:35:06 -- common/autotest_common.sh@10 -- # set +x 00:32:17.780 23:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.780 23:35:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:17.780 23:35:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:17.780 23:35:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:17.780 23:35:06 -- host/auth.sh@44 -- # digest=sha256 
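The loop structure visible here (host/auth.sh@108-111) pairs every digest with every DH group and key index: nvmet_auth_set_key programs the target side with the digest, group, and DHHC-1 secret, and connect_authenticate then restricts the SPDK initiator to the same parameters and attaches a controller with the matching key slot. A single iteration can be replayed by hand roughly as below; the nvmet configfs layout and the rpc.py location are assumptions not shown in this log, while the RPC names, flags, and values are taken verbatim from the records above. The key2 argument refers to a key object registered earlier in auth.sh, outside this excerpt.

    # Target side: program digest, DH group, and secret for the host NQN
    # (configfs attribute names assumed; auth.sh@47-49 echo these same values).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo 'ffdhe8192'    > "$host/dhchap_dhgroup"
    echo 'DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV:' > "$host/dhchap_key"

    # Initiator side: allow only the digest/group under test, then attach
    # with the matching key slot (flags exactly as logged in auth.sh@69-70).
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2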
00:32:17.780 23:35:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:17.780 23:35:06 -- host/auth.sh@44 -- # keyid=2 00:32:17.780 23:35:06 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:17.780 23:35:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:17.780 23:35:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:17.780 23:35:06 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:17.780 23:35:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:32:17.780 23:35:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:17.780 23:35:06 -- host/auth.sh@68 -- # digest=sha256 00:32:17.780 23:35:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:17.780 23:35:06 -- host/auth.sh@68 -- # keyid=2 00:32:17.780 23:35:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:17.780 23:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.780 23:35:06 -- common/autotest_common.sh@10 -- # set +x 00:32:17.780 23:35:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:17.780 23:35:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:17.780 23:35:06 -- nvmf/common.sh@717 -- # local ip 00:32:17.780 23:35:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:17.780 23:35:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:17.780 23:35:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.780 23:35:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.780 23:35:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:17.780 23:35:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.780 23:35:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:17.780 23:35:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:17.780 23:35:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:17.780 23:35:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:17.780 23:35:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.780 23:35:06 -- common/autotest_common.sh@10 -- # set +x 00:32:18.350 nvme0n1 00:32:18.350 23:35:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:18.350 23:35:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.350 23:35:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:18.350 23:35:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:18.350 23:35:07 -- common/autotest_common.sh@10 -- # set +x 00:32:18.350 23:35:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:18.610 23:35:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.610 23:35:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.610 23:35:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:18.610 23:35:07 -- common/autotest_common.sh@10 -- # set +x 00:32:18.610 23:35:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:18.610 23:35:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:18.610 23:35:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:18.610 23:35:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:18.610 23:35:07 -- host/auth.sh@44 -- # digest=sha256 00:32:18.610 23:35:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.610 23:35:07 -- host/auth.sh@44 -- # keyid=3 00:32:18.610 23:35:07 -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:18.610 23:35:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:18.610 23:35:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:18.610 23:35:07 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:18.610 23:35:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:32:18.610 23:35:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:18.610 23:35:07 -- host/auth.sh@68 -- # digest=sha256 00:32:18.610 23:35:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:18.610 23:35:07 -- host/auth.sh@68 -- # keyid=3 00:32:18.610 23:35:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:18.610 23:35:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:18.610 23:35:07 -- common/autotest_common.sh@10 -- # set +x 00:32:18.610 23:35:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:18.610 23:35:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:18.610 23:35:07 -- nvmf/common.sh@717 -- # local ip 00:32:18.610 23:35:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:18.610 23:35:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:18.610 23:35:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.610 23:35:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.610 23:35:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:18.610 23:35:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.610 23:35:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:18.610 23:35:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:18.610 23:35:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:18.610 23:35:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:18.610 23:35:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:18.610 23:35:07 -- common/autotest_common.sh@10 -- # set +x 00:32:19.180 nvme0n1 00:32:19.180 23:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:19.180 23:35:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.180 23:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:19.180 23:35:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:19.180 23:35:08 -- common/autotest_common.sh@10 -- # set +x 00:32:19.180 23:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:19.441 23:35:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.441 23:35:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.441 23:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:19.441 23:35:08 -- common/autotest_common.sh@10 -- # set +x 00:32:19.441 23:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:19.441 23:35:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:19.441 23:35:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:19.441 23:35:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:19.441 23:35:08 -- host/auth.sh@44 -- # digest=sha256 00:32:19.441 23:35:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:19.441 23:35:08 -- host/auth.sh@44 -- # keyid=4 00:32:19.441 23:35:08 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:19.441 
23:35:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:19.441 23:35:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:19.441 23:35:08 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:19.441 23:35:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:32:19.441 23:35:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:19.441 23:35:08 -- host/auth.sh@68 -- # digest=sha256 00:32:19.441 23:35:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:19.441 23:35:08 -- host/auth.sh@68 -- # keyid=4 00:32:19.441 23:35:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:19.441 23:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:19.441 23:35:08 -- common/autotest_common.sh@10 -- # set +x 00:32:19.441 23:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:19.441 23:35:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:19.441 23:35:08 -- nvmf/common.sh@717 -- # local ip 00:32:19.441 23:35:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:19.441 23:35:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:19.441 23:35:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.441 23:35:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.441 23:35:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:19.441 23:35:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.441 23:35:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:19.441 23:35:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:19.441 23:35:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:19.441 23:35:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:19.441 23:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:19.441 23:35:08 -- common/autotest_common.sh@10 -- # set +x 00:32:20.012 nvme0n1 00:32:20.012 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.012 23:35:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.012 23:35:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:20.012 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.012 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.012 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.272 23:35:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.272 23:35:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.272 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.272 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.272 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.272 23:35:09 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:32:20.272 23:35:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.272 23:35:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:20.272 23:35:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:20.272 23:35:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:20.272 23:35:09 -- host/auth.sh@44 -- # digest=sha384 00:32:20.272 23:35:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.272 23:35:09 -- host/auth.sh@44 -- # keyid=0 00:32:20.272 23:35:09 -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:20.272 23:35:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:20.272 23:35:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:20.272 23:35:09 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:20.272 23:35:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:32:20.272 23:35:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:20.272 23:35:09 -- host/auth.sh@68 -- # digest=sha384 00:32:20.272 23:35:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:20.272 23:35:09 -- host/auth.sh@68 -- # keyid=0 00:32:20.272 23:35:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:20.272 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.272 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.272 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.272 23:35:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:20.272 23:35:09 -- nvmf/common.sh@717 -- # local ip 00:32:20.272 23:35:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:20.272 23:35:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:20.272 23:35:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.272 23:35:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.272 23:35:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:20.272 23:35:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.272 23:35:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:20.272 23:35:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:20.272 23:35:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:20.272 23:35:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:20.272 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.272 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.272 nvme0n1 00:32:20.272 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.272 23:35:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.272 23:35:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:20.272 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.272 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.272 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.272 23:35:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.272 23:35:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.272 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.272 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.272 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.272 23:35:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:20.272 23:35:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:20.272 23:35:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:20.272 23:35:09 -- host/auth.sh@44 -- # digest=sha384 00:32:20.272 23:35:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.272 23:35:09 -- host/auth.sh@44 -- # keyid=1 00:32:20.272 23:35:09 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:20.272 23:35:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:20.272 
23:35:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:20.272 23:35:09 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:20.272 23:35:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:32:20.272 23:35:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:20.272 23:35:09 -- host/auth.sh@68 -- # digest=sha384 00:32:20.272 23:35:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:20.272 23:35:09 -- host/auth.sh@68 -- # keyid=1 00:32:20.272 23:35:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:20.272 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.272 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.533 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.533 23:35:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:20.533 23:35:09 -- nvmf/common.sh@717 -- # local ip 00:32:20.533 23:35:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:20.533 23:35:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:20.533 23:35:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.533 23:35:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.533 23:35:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:20.533 23:35:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.533 23:35:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:20.533 23:35:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:20.533 23:35:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:20.533 23:35:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:20.533 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.533 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.533 nvme0n1 00:32:20.533 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.533 23:35:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.533 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.533 23:35:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:20.533 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.533 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.533 23:35:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.533 23:35:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.533 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.533 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.533 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.533 23:35:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:20.533 23:35:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:20.533 23:35:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:20.533 23:35:09 -- host/auth.sh@44 -- # digest=sha384 00:32:20.533 23:35:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.533 23:35:09 -- host/auth.sh@44 -- # keyid=2 00:32:20.533 23:35:09 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:20.533 23:35:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:20.533 23:35:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:20.533 23:35:09 -- host/auth.sh@49 -- # echo 
DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:20.533 23:35:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:32:20.533 23:35:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:20.533 23:35:09 -- host/auth.sh@68 -- # digest=sha384 00:32:20.533 23:35:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:20.533 23:35:09 -- host/auth.sh@68 -- # keyid=2 00:32:20.533 23:35:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:20.533 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.533 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.533 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.533 23:35:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:20.533 23:35:09 -- nvmf/common.sh@717 -- # local ip 00:32:20.533 23:35:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:20.533 23:35:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:20.533 23:35:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.533 23:35:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.533 23:35:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:20.533 23:35:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.533 23:35:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:20.533 23:35:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:20.533 23:35:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:20.533 23:35:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:20.533 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.533 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.793 nvme0n1 00:32:20.793 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.793 23:35:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.793 23:35:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:20.793 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.793 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.793 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.793 23:35:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.793 23:35:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.794 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.794 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.794 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.794 23:35:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:20.794 23:35:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:20.794 23:35:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:20.794 23:35:09 -- host/auth.sh@44 -- # digest=sha384 00:32:20.794 23:35:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.794 23:35:09 -- host/auth.sh@44 -- # keyid=3 00:32:20.794 23:35:09 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:20.794 23:35:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:20.794 23:35:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:20.794 23:35:09 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:20.794 23:35:09 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:32:20.794 23:35:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:20.794 23:35:09 -- host/auth.sh@68 -- # digest=sha384 00:32:20.794 23:35:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:20.794 23:35:09 -- host/auth.sh@68 -- # keyid=3 00:32:20.794 23:35:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:20.794 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.794 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:20.794 23:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.794 23:35:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:20.794 23:35:09 -- nvmf/common.sh@717 -- # local ip 00:32:20.794 23:35:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:20.794 23:35:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:20.794 23:35:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.794 23:35:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.794 23:35:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:20.794 23:35:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.794 23:35:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:20.794 23:35:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:20.794 23:35:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:20.794 23:35:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:20.794 23:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.794 23:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:21.054 nvme0n1 00:32:21.054 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.054 23:35:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.054 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.054 23:35:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:21.054 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.054 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.054 23:35:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.054 23:35:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.054 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.054 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.054 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.054 23:35:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:21.054 23:35:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:21.054 23:35:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:21.054 23:35:10 -- host/auth.sh@44 -- # digest=sha384 00:32:21.054 23:35:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.054 23:35:10 -- host/auth.sh@44 -- # keyid=4 00:32:21.054 23:35:10 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:21.054 23:35:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:21.054 23:35:10 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:21.054 23:35:10 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:21.054 23:35:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:32:21.054 23:35:10 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:32:21.054 23:35:10 -- host/auth.sh@68 -- # digest=sha384 00:32:21.054 23:35:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:21.054 23:35:10 -- host/auth.sh@68 -- # keyid=4 00:32:21.054 23:35:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:21.054 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.054 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.054 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.054 23:35:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:21.054 23:35:10 -- nvmf/common.sh@717 -- # local ip 00:32:21.054 23:35:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:21.054 23:35:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:21.054 23:35:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.054 23:35:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.054 23:35:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:21.054 23:35:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.054 23:35:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:21.054 23:35:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:21.054 23:35:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:21.054 23:35:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:21.054 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.054 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.313 nvme0n1 00:32:21.313 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.313 23:35:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.313 23:35:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:21.313 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.313 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.313 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.313 23:35:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.314 23:35:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.314 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.314 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.314 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.314 23:35:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.314 23:35:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:21.314 23:35:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:21.314 23:35:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:21.314 23:35:10 -- host/auth.sh@44 -- # digest=sha384 00:32:21.314 23:35:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.314 23:35:10 -- host/auth.sh@44 -- # keyid=0 00:32:21.314 23:35:10 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:21.314 23:35:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:21.314 23:35:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:32:21.314 23:35:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:21.314 23:35:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:32:21.314 23:35:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:21.314 23:35:10 -- host/auth.sh@68 -- # 
digest=sha384 00:32:21.314 23:35:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:32:21.314 23:35:10 -- host/auth.sh@68 -- # keyid=0 00:32:21.314 23:35:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:21.314 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.314 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.314 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.314 23:35:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:21.314 23:35:10 -- nvmf/common.sh@717 -- # local ip 00:32:21.314 23:35:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:21.314 23:35:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:21.314 23:35:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.314 23:35:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.314 23:35:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:21.314 23:35:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.314 23:35:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:21.314 23:35:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:21.314 23:35:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:21.314 23:35:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:21.314 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.314 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.575 nvme0n1 00:32:21.575 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.575 23:35:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.575 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.575 23:35:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:21.575 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.575 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.575 23:35:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.575 23:35:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.575 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.575 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.575 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.575 23:35:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:21.575 23:35:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:21.575 23:35:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:21.575 23:35:10 -- host/auth.sh@44 -- # digest=sha384 00:32:21.575 23:35:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.575 23:35:10 -- host/auth.sh@44 -- # keyid=1 00:32:21.575 23:35:10 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:21.575 23:35:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:21.575 23:35:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:32:21.575 23:35:10 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:21.575 23:35:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:32:21.575 23:35:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:21.575 23:35:10 -- host/auth.sh@68 -- # digest=sha384 00:32:21.575 23:35:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:32:21.575 23:35:10 -- host/auth.sh@68 
-- # keyid=1 00:32:21.575 23:35:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:21.575 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.575 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.575 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.575 23:35:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:21.575 23:35:10 -- nvmf/common.sh@717 -- # local ip 00:32:21.575 23:35:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:21.575 23:35:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:21.575 23:35:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.575 23:35:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.575 23:35:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:21.575 23:35:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.575 23:35:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:21.575 23:35:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:21.575 23:35:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:21.575 23:35:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:21.575 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.575 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.835 nvme0n1 00:32:21.835 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.835 23:35:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.835 23:35:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:21.835 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.835 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.835 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.835 23:35:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.835 23:35:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.835 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.835 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.835 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.836 23:35:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:21.836 23:35:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:21.836 23:35:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:21.836 23:35:10 -- host/auth.sh@44 -- # digest=sha384 00:32:21.836 23:35:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.836 23:35:10 -- host/auth.sh@44 -- # keyid=2 00:32:21.836 23:35:10 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:21.836 23:35:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:21.836 23:35:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:32:21.836 23:35:10 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:21.836 23:35:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:32:21.836 23:35:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:21.836 23:35:10 -- host/auth.sh@68 -- # digest=sha384 00:32:21.836 23:35:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:32:21.836 23:35:10 -- host/auth.sh@68 -- # keyid=2 00:32:21.836 23:35:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:21.836 23:35:10 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.836 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:21.836 23:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:21.836 23:35:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:21.836 23:35:10 -- nvmf/common.sh@717 -- # local ip 00:32:21.836 23:35:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:21.836 23:35:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:21.836 23:35:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.836 23:35:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.836 23:35:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:21.836 23:35:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.836 23:35:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:21.836 23:35:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:21.836 23:35:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:21.836 23:35:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:21.836 23:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:21.836 23:35:10 -- common/autotest_common.sh@10 -- # set +x 00:32:22.096 nvme0n1 00:32:22.096 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.096 23:35:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.096 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.096 23:35:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:22.096 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.096 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.096 23:35:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.096 23:35:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.096 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.096 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.096 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.096 23:35:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:22.096 23:35:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:22.096 23:35:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:22.096 23:35:11 -- host/auth.sh@44 -- # digest=sha384 00:32:22.096 23:35:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.096 23:35:11 -- host/auth.sh@44 -- # keyid=3 00:32:22.096 23:35:11 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:22.096 23:35:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:22.096 23:35:11 -- host/auth.sh@48 -- # echo ffdhe3072 00:32:22.096 23:35:11 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:22.096 23:35:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:32:22.096 23:35:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:22.096 23:35:11 -- host/auth.sh@68 -- # digest=sha384 00:32:22.096 23:35:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:32:22.096 23:35:11 -- host/auth.sh@68 -- # keyid=3 00:32:22.096 23:35:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:22.096 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.096 23:35:11 -- common/autotest_common.sh@10 -- # set +x 
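Note how the secrets rotate with the key index in these records: keys 0 and 1 use DHHC-1:00:, key 2 uses DHHC-1:01:, key 3 DHHC-1:02:, and key 4 DHHC-1:03:. In the NVMe in-band authentication secret representation, that second field identifies the transform under which the secret is stored (00 = cleartext, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload carries the secret plus a CRC-32 check, which is why each key ends in a few non-base64-looking characters before the trailing colon. A fresh secret of this shape can be produced with nvme-cli's gen-dhchap-key helper; treat the exact flag spellings as an assumption if your nvme-cli build differs:

    # Generate a SHA-256-transformed 32-byte DH-HMAC-CHAP secret bound to the
    # host NQN used throughout this log (flag names per nvme-cli docs).
    nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0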
00:32:22.096 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.096 23:35:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:22.096 23:35:11 -- nvmf/common.sh@717 -- # local ip 00:32:22.096 23:35:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:22.096 23:35:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:22.096 23:35:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.096 23:35:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.097 23:35:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:22.097 23:35:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.097 23:35:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:22.097 23:35:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:22.097 23:35:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:22.097 23:35:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:22.097 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.097 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.357 nvme0n1 00:32:22.357 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.357 23:35:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.357 23:35:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:22.357 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.357 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.357 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.357 23:35:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.357 23:35:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.357 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.357 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.357 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.357 23:35:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:22.357 23:35:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:22.357 23:35:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:22.357 23:35:11 -- host/auth.sh@44 -- # digest=sha384 00:32:22.357 23:35:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.357 23:35:11 -- host/auth.sh@44 -- # keyid=4 00:32:22.357 23:35:11 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:22.357 23:35:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:22.357 23:35:11 -- host/auth.sh@48 -- # echo ffdhe3072 00:32:22.358 23:35:11 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:22.358 23:35:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:32:22.358 23:35:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:22.358 23:35:11 -- host/auth.sh@68 -- # digest=sha384 00:32:22.358 23:35:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:32:22.358 23:35:11 -- host/auth.sh@68 -- # keyid=4 00:32:22.358 23:35:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:22.358 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.358 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.358 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
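Each attach is validated the same way before the loop advances: bdev_nvme_get_controllers is filtered through jq for the controller name, the [[ nvme0 == \n\v\m\e\0 ]] test at host/auth.sh@73 confirms that authentication actually produced a live controller, and host/auth.sh@74 detaches it again. Condensed into a standalone check (rpc.py path assumed, commands as logged):

    # Succeeds only if DH-HMAC-CHAP authentication let the controller attach.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] || exit 1
    scripts/rpc.py bdev_nvme_detach_controller nvme0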
00:32:22.358 23:35:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:22.358 23:35:11 -- nvmf/common.sh@717 -- # local ip 00:32:22.358 23:35:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:22.358 23:35:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:22.358 23:35:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.358 23:35:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.358 23:35:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:22.358 23:35:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.358 23:35:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:22.358 23:35:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:22.358 23:35:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:22.358 23:35:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.358 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.358 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.619 nvme0n1 00:32:22.619 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.619 23:35:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.619 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.619 23:35:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:22.619 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.619 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.619 23:35:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.619 23:35:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.619 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.619 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.619 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.619 23:35:11 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.619 23:35:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:22.619 23:35:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:22.619 23:35:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:22.619 23:35:11 -- host/auth.sh@44 -- # digest=sha384 00:32:22.619 23:35:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:22.619 23:35:11 -- host/auth.sh@44 -- # keyid=0 00:32:22.619 23:35:11 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:22.619 23:35:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:22.619 23:35:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:32:22.619 23:35:11 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:22.619 23:35:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:32:22.619 23:35:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:22.619 23:35:11 -- host/auth.sh@68 -- # digest=sha384 00:32:22.619 23:35:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:32:22.619 23:35:11 -- host/auth.sh@68 -- # keyid=0 00:32:22.619 23:35:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:22.619 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.619 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.619 23:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.619 23:35:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:22.619 23:35:11 -- 
nvmf/common.sh@717 -- # local ip 00:32:22.619 23:35:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:22.619 23:35:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:22.619 23:35:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.619 23:35:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.619 23:35:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:22.619 23:35:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.619 23:35:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:22.619 23:35:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:22.619 23:35:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:22.619 23:35:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:22.619 23:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.619 23:35:11 -- common/autotest_common.sh@10 -- # set +x 00:32:22.879 nvme0n1 00:32:22.879 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.879 23:35:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.879 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.879 23:35:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:22.879 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:22.879 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.879 23:35:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.879 23:35:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.879 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.879 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:22.879 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.879 23:35:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:22.879 23:35:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:22.879 23:35:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:22.879 23:35:12 -- host/auth.sh@44 -- # digest=sha384 00:32:22.879 23:35:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:22.879 23:35:12 -- host/auth.sh@44 -- # keyid=1 00:32:22.879 23:35:12 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:22.879 23:35:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:22.879 23:35:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:32:22.879 23:35:12 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:22.879 23:35:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:32:22.879 23:35:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:22.879 23:35:12 -- host/auth.sh@68 -- # digest=sha384 00:32:22.879 23:35:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:32:22.879 23:35:12 -- host/auth.sh@68 -- # keyid=1 00:32:22.879 23:35:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:22.879 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.879 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:22.879 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.879 23:35:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:22.879 23:35:12 -- nvmf/common.sh@717 -- # local ip 00:32:22.879 23:35:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:22.879 23:35:12 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:22.879 23:35:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.879 23:35:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.879 23:35:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:22.879 23:35:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.879 23:35:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:22.879 23:35:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:22.879 23:35:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:22.879 23:35:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:22.879 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:22.879 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.139 nvme0n1 00:32:23.139 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.139 23:35:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.139 23:35:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:23.139 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.139 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.399 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.399 23:35:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.400 23:35:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.400 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.400 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.400 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.400 23:35:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:23.400 23:35:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:23.400 23:35:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:23.400 23:35:12 -- host/auth.sh@44 -- # digest=sha384 00:32:23.400 23:35:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.400 23:35:12 -- host/auth.sh@44 -- # keyid=2 00:32:23.400 23:35:12 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:23.400 23:35:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:23.400 23:35:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:32:23.400 23:35:12 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:23.400 23:35:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:32:23.400 23:35:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:23.400 23:35:12 -- host/auth.sh@68 -- # digest=sha384 00:32:23.400 23:35:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:32:23.400 23:35:12 -- host/auth.sh@68 -- # keyid=2 00:32:23.400 23:35:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:23.400 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.400 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.400 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.400 23:35:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:23.400 23:35:12 -- nvmf/common.sh@717 -- # local ip 00:32:23.400 23:35:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:23.400 23:35:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:23.400 23:35:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.400 23:35:12 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.400 23:35:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:23.400 23:35:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.400 23:35:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:23.400 23:35:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:23.400 23:35:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:23.400 23:35:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:23.400 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.400 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.660 nvme0n1 00:32:23.661 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.661 23:35:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.661 23:35:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:23.661 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.661 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.661 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.661 23:35:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.661 23:35:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.661 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.661 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.661 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.661 23:35:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:23.661 23:35:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:23.661 23:35:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:23.661 23:35:12 -- host/auth.sh@44 -- # digest=sha384 00:32:23.661 23:35:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.661 23:35:12 -- host/auth.sh@44 -- # keyid=3 00:32:23.661 23:35:12 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:23.661 23:35:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:23.661 23:35:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:32:23.661 23:35:12 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:23.661 23:35:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:32:23.661 23:35:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:23.661 23:35:12 -- host/auth.sh@68 -- # digest=sha384 00:32:23.661 23:35:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:32:23.661 23:35:12 -- host/auth.sh@68 -- # keyid=3 00:32:23.661 23:35:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:23.661 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.661 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.661 23:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.661 23:35:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:23.661 23:35:12 -- nvmf/common.sh@717 -- # local ip 00:32:23.661 23:35:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:23.661 23:35:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:23.661 23:35:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.661 23:35:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.661 23:35:12 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:32:23.661 23:35:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.661 23:35:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:23.661 23:35:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:23.661 23:35:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:23.661 23:35:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:23.661 23:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.661 23:35:12 -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 nvme0n1 00:32:23.920 23:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.920 23:35:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.920 23:35:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:23.920 23:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.920 23:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 23:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.920 23:35:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.920 23:35:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.920 23:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.920 23:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 23:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:23.920 23:35:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:23.920 23:35:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:23.920 23:35:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:23.920 23:35:13 -- host/auth.sh@44 -- # digest=sha384 00:32:23.920 23:35:13 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.920 23:35:13 -- host/auth.sh@44 -- # keyid=4 00:32:23.920 23:35:13 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:23.920 23:35:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:23.920 23:35:13 -- host/auth.sh@48 -- # echo ffdhe4096 00:32:23.920 23:35:13 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:23.920 23:35:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:32:23.920 23:35:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:23.920 23:35:13 -- host/auth.sh@68 -- # digest=sha384 00:32:23.920 23:35:13 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:32:23.920 23:35:13 -- host/auth.sh@68 -- # keyid=4 00:32:23.920 23:35:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:23.920 23:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:23.920 23:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.180 23:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:24.180 23:35:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:24.180 23:35:13 -- nvmf/common.sh@717 -- # local ip 00:32:24.180 23:35:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:24.180 23:35:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:24.180 23:35:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.180 23:35:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.180 23:35:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:24.180 23:35:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:32:24.180 23:35:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:24.180 23:35:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:24.180 23:35:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:24.180 23:35:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:24.180 23:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:24.180 23:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.440 nvme0n1 00:32:24.440 23:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:24.440 23:35:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.440 23:35:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:24.440 23:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:24.441 23:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.441 23:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:24.441 23:35:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.441 23:35:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.441 23:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:24.441 23:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.441 23:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:24.441 23:35:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:24.441 23:35:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:24.441 23:35:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:24.441 23:35:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:24.441 23:35:13 -- host/auth.sh@44 -- # digest=sha384 00:32:24.441 23:35:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:24.441 23:35:13 -- host/auth.sh@44 -- # keyid=0 00:32:24.441 23:35:13 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:24.441 23:35:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:24.441 23:35:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:24.441 23:35:13 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:24.441 23:35:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:32:24.441 23:35:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:24.441 23:35:13 -- host/auth.sh@68 -- # digest=sha384 00:32:24.441 23:35:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:24.441 23:35:13 -- host/auth.sh@68 -- # keyid=0 00:32:24.441 23:35:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:24.441 23:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:24.441 23:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.441 23:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:24.441 23:35:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:24.441 23:35:13 -- nvmf/common.sh@717 -- # local ip 00:32:24.441 23:35:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:24.441 23:35:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:24.441 23:35:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.441 23:35:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.441 23:35:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:24.441 23:35:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.441 23:35:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:24.441 
23:35:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:24.441 23:35:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:24.441 23:35:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:24.441 23:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:24.441 23:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.702 nvme0n1 00:32:24.702 23:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:24.702 23:35:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.702 23:35:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:24.702 23:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:24.702 23:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.702 23:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:24.963 23:35:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.963 23:35:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.963 23:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:24.963 23:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.963 23:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:24.963 23:35:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:24.963 23:35:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:24.963 23:35:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:24.964 23:35:14 -- host/auth.sh@44 -- # digest=sha384 00:32:24.964 23:35:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:24.964 23:35:14 -- host/auth.sh@44 -- # keyid=1 00:32:24.964 23:35:14 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:24.964 23:35:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:24.964 23:35:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:24.964 23:35:14 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:24.964 23:35:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:32:24.964 23:35:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:24.964 23:35:14 -- host/auth.sh@68 -- # digest=sha384 00:32:24.964 23:35:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:24.964 23:35:14 -- host/auth.sh@68 -- # keyid=1 00:32:24.964 23:35:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:24.964 23:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:24.964 23:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:24.964 23:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:24.964 23:35:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:24.964 23:35:14 -- nvmf/common.sh@717 -- # local ip 00:32:24.964 23:35:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:24.964 23:35:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:24.964 23:35:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.964 23:35:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.964 23:35:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:24.964 23:35:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.964 23:35:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:24.964 23:35:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:24.964 23:35:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
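Each iteration of the trace above repeats one two-sided pattern: nvmet_auth_set_key (host/auth.sh@42-49) programs a single DH-HMAC-CHAP secret, digest and DH group into the kernel nvmet target, and connect_authenticate then exercises the SPDK host against that configuration. The three echo lines at host/auth.sh@47-49 are redirected writes whose targets the xtrace does not show; a minimal sketch of the target side, assuming they land on the standard kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key - an assumption, not confirmed by the trace), would be:

    # Hedged sketch of nvmet_auth_set_key for one (digest, dhgroup, keyid)
    # combination; the configfs paths are assumed, only the echoed values
    # are taken from the trace.
    hostnqn=nqn.2024-02.io.spdk:host0
    hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn
    key='DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV:'  # keyid 2, copied from the trace
    echo 'hmac(sha384)' > "$hostdir/dhchap_hash"     # digest under test
    echo ffdhe4096      > "$hostdir/dhchap_dhgroup"  # DH group under test
    echo "$key"         > "$hostdir/dhchap_key"      # secret for this keyid

The secrets follow the NVMe DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where the <t> field (00 through 03 across the five keys in this run) identifies the transformation hash applied to the stored secret, 00 meaning untransformed.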
00:32:24.964 23:35:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:24.964 23:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:24.964 23:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.537 nvme0n1 00:32:25.537 23:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.537 23:35:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.537 23:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.537 23:35:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:25.537 23:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.537 23:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.537 23:35:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.537 23:35:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.537 23:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.537 23:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.537 23:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.537 23:35:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:25.537 23:35:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:25.537 23:35:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:25.537 23:35:14 -- host/auth.sh@44 -- # digest=sha384 00:32:25.537 23:35:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.537 23:35:14 -- host/auth.sh@44 -- # keyid=2 00:32:25.537 23:35:14 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:25.537 23:35:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:25.537 23:35:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:25.537 23:35:14 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:25.537 23:35:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:32:25.537 23:35:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:25.537 23:35:14 -- host/auth.sh@68 -- # digest=sha384 00:32:25.537 23:35:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:25.537 23:35:14 -- host/auth.sh@68 -- # keyid=2 00:32:25.537 23:35:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:25.537 23:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.537 23:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.537 23:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.537 23:35:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:25.537 23:35:14 -- nvmf/common.sh@717 -- # local ip 00:32:25.537 23:35:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:25.537 23:35:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:25.537 23:35:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.537 23:35:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.537 23:35:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:25.537 23:35:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.537 23:35:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:25.537 23:35:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:25.537 23:35:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:25.537 23:35:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:25.537 23:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.537 23:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.797 nvme0n1 00:32:25.797 23:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.797 23:35:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.797 23:35:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:25.797 23:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.797 23:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.797 23:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.797 23:35:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.797 23:35:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.797 23:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.797 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:32:25.797 23:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.797 23:35:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:25.797 23:35:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:25.797 23:35:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:25.797 23:35:15 -- host/auth.sh@44 -- # digest=sha384 00:32:25.797 23:35:15 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.797 23:35:15 -- host/auth.sh@44 -- # keyid=3 00:32:25.797 23:35:15 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:25.797 23:35:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:25.797 23:35:15 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:25.797 23:35:15 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:25.797 23:35:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:32:25.797 23:35:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:25.797 23:35:15 -- host/auth.sh@68 -- # digest=sha384 00:32:25.797 23:35:15 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:25.797 23:35:15 -- host/auth.sh@68 -- # keyid=3 00:32:25.797 23:35:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:25.797 23:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.797 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:32:25.797 23:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.797 23:35:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:25.797 23:35:15 -- nvmf/common.sh@717 -- # local ip 00:32:25.797 23:35:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:25.797 23:35:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:25.798 23:35:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.798 23:35:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.798 23:35:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:25.798 23:35:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.798 23:35:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:25.798 23:35:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:25.798 23:35:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:25.798 23:35:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:25.798 23:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 
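The host side of the same iteration is visible verbatim in the trace: bdev_nvme_set_options pins the initiator to exactly the digest and DH group under test, get_main_ns_ip resolves the tcp target address to 10.0.0.1, and bdev_nvme_attach_controller performs the authenticated connect. rpc_cmd in the harness is a wrapper around SPDK's scripts/rpc.py, so the sequence condenses to the following (all flags copied from the trace; only the wrapper is stripped):

    # Host-side half of one sha384/ffdhe6144 iteration, as direct rpc.py calls.
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3

The keyN names refer to DH-HMAC-CHAP key objects loaded earlier in auth.sh, outside this excerpt; the attach succeeds only when that key matches the secret just written into the target for the same keyid.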
00:32:25.798 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:32:26.367 nvme0n1 00:32:26.367 23:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.367 23:35:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.367 23:35:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:26.367 23:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.367 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:32:26.367 23:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.367 23:35:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.367 23:35:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.367 23:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.367 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:32:26.367 23:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.367 23:35:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:26.367 23:35:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:26.367 23:35:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:26.367 23:35:15 -- host/auth.sh@44 -- # digest=sha384 00:32:26.367 23:35:15 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.367 23:35:15 -- host/auth.sh@44 -- # keyid=4 00:32:26.367 23:35:15 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:26.367 23:35:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:26.367 23:35:15 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:26.367 23:35:15 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:26.367 23:35:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:32:26.367 23:35:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:26.367 23:35:15 -- host/auth.sh@68 -- # digest=sha384 00:32:26.367 23:35:15 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:26.367 23:35:15 -- host/auth.sh@68 -- # keyid=4 00:32:26.367 23:35:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:26.367 23:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.367 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:32:26.367 23:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.367 23:35:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:26.367 23:35:15 -- nvmf/common.sh@717 -- # local ip 00:32:26.367 23:35:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:26.367 23:35:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:26.367 23:35:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.367 23:35:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.367 23:35:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:26.367 23:35:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.367 23:35:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:26.367 23:35:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:26.367 23:35:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:26.368 23:35:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:26.368 23:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.368 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:32:26.937 
nvme0n1 00:32:26.937 23:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.937 23:35:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.937 23:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.937 23:35:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:26.937 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:32:26.937 23:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.937 23:35:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.937 23:35:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.937 23:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.937 23:35:16 -- common/autotest_common.sh@10 -- # set +x 00:32:26.937 23:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.937 23:35:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:26.937 23:35:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:26.937 23:35:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:26.937 23:35:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:26.937 23:35:16 -- host/auth.sh@44 -- # digest=sha384 00:32:26.937 23:35:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:26.937 23:35:16 -- host/auth.sh@44 -- # keyid=0 00:32:26.937 23:35:16 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:26.937 23:35:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:26.937 23:35:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:26.937 23:35:16 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:26.937 23:35:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:32:26.937 23:35:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:26.937 23:35:16 -- host/auth.sh@68 -- # digest=sha384 00:32:26.937 23:35:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:26.937 23:35:16 -- host/auth.sh@68 -- # keyid=0 00:32:26.937 23:35:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:26.937 23:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.937 23:35:16 -- common/autotest_common.sh@10 -- # set +x 00:32:26.937 23:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.937 23:35:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:26.937 23:35:16 -- nvmf/common.sh@717 -- # local ip 00:32:26.937 23:35:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:26.937 23:35:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:26.937 23:35:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.937 23:35:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.937 23:35:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:26.937 23:35:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.937 23:35:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:26.937 23:35:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:26.937 23:35:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:26.937 23:35:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:26.937 23:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.937 23:35:16 -- common/autotest_common.sh@10 -- # set +x 00:32:27.880 nvme0n1 00:32:27.880 23:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
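Every combination then closes with the same check-and-teardown (host/auth.sh@73-74): a successful handshake must leave exactly one controller named nvme0, whose namespace is the nvme0n1 printed between entries, and that controller is detached so the next digest/dhgroup/key combination starts from a clean state. As a standalone sketch of that check, using the same rpc.py and jq calls shown in the trace:

    # Verification and teardown mirroring host/auth.sh@73-74.
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                       # fails the test if auth did not complete
    rpc.py bdev_nvme_detach_controller nvme0   # reset for the next combination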
00:32:27.880 23:35:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.880 23:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.880 23:35:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:27.880 23:35:16 -- common/autotest_common.sh@10 -- # set +x 00:32:27.880 23:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.880 23:35:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.880 23:35:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.880 23:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.880 23:35:16 -- common/autotest_common.sh@10 -- # set +x 00:32:27.880 23:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.880 23:35:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:27.880 23:35:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:27.880 23:35:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:27.880 23:35:16 -- host/auth.sh@44 -- # digest=sha384 00:32:27.880 23:35:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:27.880 23:35:16 -- host/auth.sh@44 -- # keyid=1 00:32:27.880 23:35:16 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:27.880 23:35:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:27.880 23:35:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:27.880 23:35:16 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:27.880 23:35:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:32:27.880 23:35:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:27.880 23:35:16 -- host/auth.sh@68 -- # digest=sha384 00:32:27.880 23:35:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:27.880 23:35:16 -- host/auth.sh@68 -- # keyid=1 00:32:27.880 23:35:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:27.880 23:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.880 23:35:16 -- common/autotest_common.sh@10 -- # set +x 00:32:27.880 23:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.880 23:35:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:27.880 23:35:16 -- nvmf/common.sh@717 -- # local ip 00:32:27.880 23:35:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:27.880 23:35:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:27.880 23:35:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.880 23:35:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.880 23:35:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:27.880 23:35:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.880 23:35:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:27.880 23:35:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:27.880 23:35:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:27.880 23:35:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:27.880 23:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.880 23:35:16 -- common/autotest_common.sh@10 -- # set +x 00:32:28.453 nvme0n1 00:32:28.453 23:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:28.453 23:35:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.453 23:35:17 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:32:28.453 23:35:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:28.453 23:35:17 -- common/autotest_common.sh@10 -- # set +x 00:32:28.453 23:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:28.453 23:35:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.453 23:35:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.453 23:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:28.453 23:35:17 -- common/autotest_common.sh@10 -- # set +x 00:32:28.714 23:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:28.714 23:35:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:28.714 23:35:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:28.714 23:35:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:28.714 23:35:17 -- host/auth.sh@44 -- # digest=sha384 00:32:28.714 23:35:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:28.714 23:35:17 -- host/auth.sh@44 -- # keyid=2 00:32:28.714 23:35:17 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:28.714 23:35:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:28.714 23:35:17 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:28.714 23:35:17 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:28.714 23:35:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:32:28.714 23:35:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:28.714 23:35:17 -- host/auth.sh@68 -- # digest=sha384 00:32:28.714 23:35:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:28.714 23:35:17 -- host/auth.sh@68 -- # keyid=2 00:32:28.714 23:35:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:28.714 23:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:28.714 23:35:17 -- common/autotest_common.sh@10 -- # set +x 00:32:28.714 23:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:28.714 23:35:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:28.714 23:35:17 -- nvmf/common.sh@717 -- # local ip 00:32:28.714 23:35:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:28.714 23:35:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:28.714 23:35:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.714 23:35:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.714 23:35:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:28.714 23:35:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.714 23:35:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:28.714 23:35:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:28.714 23:35:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:28.714 23:35:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:28.714 23:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:28.714 23:35:17 -- common/autotest_common.sh@10 -- # set +x 00:32:29.284 nvme0n1 00:32:29.284 23:35:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:29.284 23:35:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.284 23:35:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:29.284 23:35:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:29.284 23:35:18 -- common/autotest_common.sh@10 
-- # set +x 00:32:29.284 23:35:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:29.284 23:35:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.284 23:35:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.284 23:35:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:29.284 23:35:18 -- common/autotest_common.sh@10 -- # set +x 00:32:29.545 23:35:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:29.545 23:35:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:29.545 23:35:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:29.545 23:35:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:29.545 23:35:18 -- host/auth.sh@44 -- # digest=sha384 00:32:29.545 23:35:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.545 23:35:18 -- host/auth.sh@44 -- # keyid=3 00:32:29.545 23:35:18 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:29.545 23:35:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:29.545 23:35:18 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:29.545 23:35:18 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:29.545 23:35:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:32:29.545 23:35:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:29.545 23:35:18 -- host/auth.sh@68 -- # digest=sha384 00:32:29.545 23:35:18 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:29.545 23:35:18 -- host/auth.sh@68 -- # keyid=3 00:32:29.545 23:35:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:29.545 23:35:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:29.545 23:35:18 -- common/autotest_common.sh@10 -- # set +x 00:32:29.545 23:35:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:29.545 23:35:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:29.545 23:35:18 -- nvmf/common.sh@717 -- # local ip 00:32:29.545 23:35:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:29.545 23:35:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:29.545 23:35:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.545 23:35:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.545 23:35:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:29.545 23:35:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.545 23:35:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:29.545 23:35:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:29.545 23:35:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:29.545 23:35:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:29.545 23:35:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:29.545 23:35:18 -- common/autotest_common.sh@10 -- # set +x 00:32:30.116 nvme0n1 00:32:30.116 23:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:30.116 23:35:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.116 23:35:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:30.116 23:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:30.116 23:35:19 -- common/autotest_common.sh@10 -- # set +x 00:32:30.116 23:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:30.116 23:35:19 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.116 23:35:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.116 23:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:30.116 23:35:19 -- common/autotest_common.sh@10 -- # set +x 00:32:30.376 23:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:30.376 23:35:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:30.376 23:35:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:30.376 23:35:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:30.376 23:35:19 -- host/auth.sh@44 -- # digest=sha384 00:32:30.376 23:35:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.376 23:35:19 -- host/auth.sh@44 -- # keyid=4 00:32:30.376 23:35:19 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:30.376 23:35:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:32:30.376 23:35:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:30.376 23:35:19 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:30.376 23:35:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:32:30.376 23:35:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:30.376 23:35:19 -- host/auth.sh@68 -- # digest=sha384 00:32:30.376 23:35:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:30.376 23:35:19 -- host/auth.sh@68 -- # keyid=4 00:32:30.376 23:35:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:30.376 23:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:30.376 23:35:19 -- common/autotest_common.sh@10 -- # set +x 00:32:30.376 23:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:30.376 23:35:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:30.376 23:35:19 -- nvmf/common.sh@717 -- # local ip 00:32:30.376 23:35:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:30.376 23:35:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:30.376 23:35:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.376 23:35:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.376 23:35:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:30.376 23:35:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.376 23:35:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:30.376 23:35:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:30.376 23:35:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:30.376 23:35:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:30.376 23:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:30.376 23:35:19 -- common/autotest_common.sh@10 -- # set +x 00:32:30.948 nvme0n1 00:32:30.948 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:30.948 23:35:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.948 23:35:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:30.948 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:30.948 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:30.948 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:30.948 23:35:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.948 23:35:20 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.948 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:30.948 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.209 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.209 23:35:20 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:32:31.209 23:35:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.209 23:35:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:31.209 23:35:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:31.209 23:35:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:31.209 23:35:20 -- host/auth.sh@44 -- # digest=sha512 00:32:31.209 23:35:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.209 23:35:20 -- host/auth.sh@44 -- # keyid=0 00:32:31.209 23:35:20 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:31.209 23:35:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:31.209 23:35:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:31.209 23:35:20 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:31.209 23:35:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:32:31.209 23:35:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:31.209 23:35:20 -- host/auth.sh@68 -- # digest=sha512 00:32:31.209 23:35:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:31.209 23:35:20 -- host/auth.sh@68 -- # keyid=0 00:32:31.209 23:35:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:31.209 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.209 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.209 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.209 23:35:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:31.209 23:35:20 -- nvmf/common.sh@717 -- # local ip 00:32:31.209 23:35:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:31.209 23:35:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:31.209 23:35:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.209 23:35:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.209 23:35:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:31.209 23:35:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.209 23:35:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:31.209 23:35:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:31.210 23:35:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:31.210 23:35:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:31.210 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.210 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.210 nvme0n1 00:32:31.210 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.210 23:35:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.210 23:35:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:31.210 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.210 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.210 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.210 23:35:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.210 23:35:20 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.210 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.210 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.210 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.210 23:35:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:31.210 23:35:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:31.210 23:35:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:31.210 23:35:20 -- host/auth.sh@44 -- # digest=sha512 00:32:31.210 23:35:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.210 23:35:20 -- host/auth.sh@44 -- # keyid=1 00:32:31.210 23:35:20 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:31.210 23:35:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:31.210 23:35:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:31.210 23:35:20 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:31.210 23:35:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:32:31.210 23:35:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:31.210 23:35:20 -- host/auth.sh@68 -- # digest=sha512 00:32:31.210 23:35:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:31.210 23:35:20 -- host/auth.sh@68 -- # keyid=1 00:32:31.210 23:35:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:31.210 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.210 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.210 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.210 23:35:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:31.210 23:35:20 -- nvmf/common.sh@717 -- # local ip 00:32:31.210 23:35:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:31.210 23:35:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:31.210 23:35:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.210 23:35:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.210 23:35:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:31.210 23:35:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.210 23:35:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:31.210 23:35:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:31.210 23:35:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:31.210 23:35:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:31.210 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.210 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.471 nvme0n1 00:32:31.471 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.471 23:35:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.471 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.471 23:35:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:31.471 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.471 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.471 23:35:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.471 23:35:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.471 23:35:20 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:32:31.471 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.471 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.471 23:35:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:31.471 23:35:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:31.471 23:35:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:31.471 23:35:20 -- host/auth.sh@44 -- # digest=sha512 00:32:31.471 23:35:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.471 23:35:20 -- host/auth.sh@44 -- # keyid=2 00:32:31.471 23:35:20 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:31.471 23:35:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:31.471 23:35:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:31.471 23:35:20 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:31.471 23:35:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:32:31.471 23:35:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:31.471 23:35:20 -- host/auth.sh@68 -- # digest=sha512 00:32:31.471 23:35:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:31.471 23:35:20 -- host/auth.sh@68 -- # keyid=2 00:32:31.471 23:35:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:31.471 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.471 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.471 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.471 23:35:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:31.471 23:35:20 -- nvmf/common.sh@717 -- # local ip 00:32:31.471 23:35:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:31.471 23:35:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:31.471 23:35:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.471 23:35:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.471 23:35:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:31.471 23:35:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.471 23:35:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:31.471 23:35:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:31.471 23:35:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:31.471 23:35:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:31.471 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.471 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.732 nvme0n1 00:32:31.732 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.732 23:35:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.732 23:35:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:31.732 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.732 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.732 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.732 23:35:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.732 23:35:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.732 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.732 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.732 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.732 
23:35:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:31.732 23:35:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:31.732 23:35:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:31.732 23:35:20 -- host/auth.sh@44 -- # digest=sha512 00:32:31.732 23:35:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.732 23:35:20 -- host/auth.sh@44 -- # keyid=3 00:32:31.732 23:35:20 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:31.732 23:35:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:31.732 23:35:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:31.732 23:35:20 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:31.732 23:35:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:32:31.732 23:35:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:31.732 23:35:20 -- host/auth.sh@68 -- # digest=sha512 00:32:31.732 23:35:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:31.732 23:35:20 -- host/auth.sh@68 -- # keyid=3 00:32:31.732 23:35:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:31.732 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.732 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.732 23:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.732 23:35:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:31.732 23:35:20 -- nvmf/common.sh@717 -- # local ip 00:32:31.732 23:35:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:31.732 23:35:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:31.732 23:35:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.732 23:35:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.732 23:35:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:31.732 23:35:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.732 23:35:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:31.732 23:35:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:31.732 23:35:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:31.732 23:35:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:31.732 23:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.732 23:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.993 nvme0n1 00:32:31.993 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.993 23:35:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.993 23:35:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:31.993 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.993 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:31.993 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.993 23:35:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.993 23:35:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.993 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.993 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:31.993 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.993 23:35:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:31.993 23:35:21 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:32:31.993 23:35:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:31.993 23:35:21 -- host/auth.sh@44 -- # digest=sha512 00:32:31.993 23:35:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.993 23:35:21 -- host/auth.sh@44 -- # keyid=4 00:32:31.993 23:35:21 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:31.993 23:35:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:31.993 23:35:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:31.993 23:35:21 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:31.993 23:35:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:32:31.993 23:35:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:31.993 23:35:21 -- host/auth.sh@68 -- # digest=sha512 00:32:31.993 23:35:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:32:31.993 23:35:21 -- host/auth.sh@68 -- # keyid=4 00:32:31.993 23:35:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:31.993 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.993 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:31.993 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.993 23:35:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:31.993 23:35:21 -- nvmf/common.sh@717 -- # local ip 00:32:31.993 23:35:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:31.993 23:35:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:31.993 23:35:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.993 23:35:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.993 23:35:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:31.993 23:35:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.993 23:35:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:31.993 23:35:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:31.993 23:35:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:31.993 23:35:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.993 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.993 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.254 nvme0n1 00:32:32.254 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.254 23:35:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.254 23:35:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:32.254 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.254 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.254 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.254 23:35:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.254 23:35:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.254 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.254 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.254 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.254 23:35:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:32.254 23:35:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:32.254 23:35:21 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe3072 0 00:32:32.254 23:35:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:32.254 23:35:21 -- host/auth.sh@44 -- # digest=sha512 00:32:32.254 23:35:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.254 23:35:21 -- host/auth.sh@44 -- # keyid=0 00:32:32.254 23:35:21 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:32.254 23:35:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:32.254 23:35:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:32:32.254 23:35:21 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:32.254 23:35:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:32:32.254 23:35:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:32.254 23:35:21 -- host/auth.sh@68 -- # digest=sha512 00:32:32.254 23:35:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:32:32.254 23:35:21 -- host/auth.sh@68 -- # keyid=0 00:32:32.254 23:35:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:32.254 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.254 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.254 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.254 23:35:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:32.254 23:35:21 -- nvmf/common.sh@717 -- # local ip 00:32:32.254 23:35:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:32.254 23:35:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:32.254 23:35:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.254 23:35:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.254 23:35:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:32.254 23:35:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.254 23:35:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:32.254 23:35:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:32.255 23:35:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:32.255 23:35:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:32.255 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.255 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.515 nvme0n1 00:32:32.515 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.515 23:35:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.515 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.515 23:35:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:32.515 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.515 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.515 23:35:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.515 23:35:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.515 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.515 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.515 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.515 23:35:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:32.515 23:35:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:32.515 23:35:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:32.515 23:35:21 -- host/auth.sh@44 -- # 
digest=sha512 00:32:32.515 23:35:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.515 23:35:21 -- host/auth.sh@44 -- # keyid=1 00:32:32.515 23:35:21 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:32.515 23:35:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:32.515 23:35:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:32:32.515 23:35:21 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:32.515 23:35:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:32:32.515 23:35:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:32.515 23:35:21 -- host/auth.sh@68 -- # digest=sha512 00:32:32.515 23:35:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:32:32.515 23:35:21 -- host/auth.sh@68 -- # keyid=1 00:32:32.515 23:35:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:32.515 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.515 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.515 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.515 23:35:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:32.515 23:35:21 -- nvmf/common.sh@717 -- # local ip 00:32:32.515 23:35:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:32.515 23:35:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:32.515 23:35:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.515 23:35:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.515 23:35:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:32.515 23:35:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.515 23:35:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:32.516 23:35:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:32.516 23:35:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:32.516 23:35:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:32.516 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.516 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.776 nvme0n1 00:32:32.776 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.776 23:35:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.776 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.776 23:35:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:32.776 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.776 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.776 23:35:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.776 23:35:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.776 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.776 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.776 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.776 23:35:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:32.776 23:35:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:32.776 23:35:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:32.776 23:35:21 -- host/auth.sh@44 -- # digest=sha512 00:32:32.776 23:35:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.776 23:35:21 -- host/auth.sh@44 
-- # keyid=2 00:32:32.776 23:35:21 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:32.776 23:35:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:32.776 23:35:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:32:32.776 23:35:21 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:32.776 23:35:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:32:32.776 23:35:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:32.776 23:35:21 -- host/auth.sh@68 -- # digest=sha512 00:32:32.776 23:35:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:32:32.776 23:35:21 -- host/auth.sh@68 -- # keyid=2 00:32:32.776 23:35:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:32.776 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.776 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:32.776 23:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.776 23:35:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:32.776 23:35:21 -- nvmf/common.sh@717 -- # local ip 00:32:32.776 23:35:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:32.776 23:35:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:32.776 23:35:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.776 23:35:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.776 23:35:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:32.776 23:35:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.776 23:35:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:32.776 23:35:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:32.776 23:35:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:32.776 23:35:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.776 23:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.776 23:35:21 -- common/autotest_common.sh@10 -- # set +x 00:32:33.037 nvme0n1 00:32:33.037 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.037 23:35:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.037 23:35:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:33.037 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.037 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.037 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.037 23:35:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.037 23:35:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.037 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.037 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.037 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.037 23:35:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:33.037 23:35:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:33.037 23:35:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:33.037 23:35:22 -- host/auth.sh@44 -- # digest=sha512 00:32:33.037 23:35:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.037 23:35:22 -- host/auth.sh@44 -- # keyid=3 00:32:33.037 23:35:22 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:33.037 23:35:22 
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:33.037 23:35:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:32:33.037 23:35:22 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:33.037 23:35:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:32:33.037 23:35:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:33.037 23:35:22 -- host/auth.sh@68 -- # digest=sha512 00:32:33.037 23:35:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:32:33.037 23:35:22 -- host/auth.sh@68 -- # keyid=3 00:32:33.037 23:35:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:33.037 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.037 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.037 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.037 23:35:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:33.037 23:35:22 -- nvmf/common.sh@717 -- # local ip 00:32:33.037 23:35:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:33.037 23:35:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:33.037 23:35:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.037 23:35:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.037 23:35:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:33.037 23:35:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.037 23:35:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:33.037 23:35:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:33.037 23:35:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:33.037 23:35:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:33.037 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.037 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.298 nvme0n1 00:32:33.298 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.298 23:35:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.298 23:35:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:33.298 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.298 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.298 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.298 23:35:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.298 23:35:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.298 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.298 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.298 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.298 23:35:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:33.298 23:35:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:33.298 23:35:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:33.298 23:35:22 -- host/auth.sh@44 -- # digest=sha512 00:32:33.298 23:35:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.298 23:35:22 -- host/auth.sh@44 -- # keyid=4 00:32:33.298 23:35:22 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:33.298 23:35:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:33.298 23:35:22 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:32:33.298 23:35:22 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:33.298 23:35:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:32:33.298 23:35:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:33.298 23:35:22 -- host/auth.sh@68 -- # digest=sha512 00:32:33.298 23:35:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:32:33.298 23:35:22 -- host/auth.sh@68 -- # keyid=4 00:32:33.298 23:35:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:33.298 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.298 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.298 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.298 23:35:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:33.298 23:35:22 -- nvmf/common.sh@717 -- # local ip 00:32:33.298 23:35:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:33.298 23:35:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:33.298 23:35:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.298 23:35:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.298 23:35:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:33.298 23:35:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.298 23:35:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:33.298 23:35:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:33.298 23:35:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:33.298 23:35:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:33.298 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.298 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.559 nvme0n1 00:32:33.559 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.559 23:35:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.559 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.559 23:35:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:33.559 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.559 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.559 23:35:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.559 23:35:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.559 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.559 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.559 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.559 23:35:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.559 23:35:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:33.559 23:35:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:33.559 23:35:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:33.559 23:35:22 -- host/auth.sh@44 -- # digest=sha512 00:32:33.559 23:35:22 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.559 23:35:22 -- host/auth.sh@44 -- # keyid=0 00:32:33.559 23:35:22 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:33.559 23:35:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:33.559 23:35:22 -- host/auth.sh@48 -- # echo ffdhe4096 00:32:33.559 23:35:22 -- 
host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:33.559 23:35:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:32:33.559 23:35:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:33.559 23:35:22 -- host/auth.sh@68 -- # digest=sha512 00:32:33.559 23:35:22 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:32:33.559 23:35:22 -- host/auth.sh@68 -- # keyid=0 00:32:33.559 23:35:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:33.559 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.559 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.559 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.559 23:35:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:33.559 23:35:22 -- nvmf/common.sh@717 -- # local ip 00:32:33.559 23:35:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:33.559 23:35:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:33.559 23:35:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.559 23:35:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.559 23:35:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:33.559 23:35:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.559 23:35:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:33.559 23:35:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:33.559 23:35:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:33.559 23:35:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:33.559 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.559 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.823 nvme0n1 00:32:33.823 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.823 23:35:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.823 23:35:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:33.823 23:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.823 23:35:22 -- common/autotest_common.sh@10 -- # set +x 00:32:33.823 23:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.823 23:35:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.823 23:35:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.823 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.823 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:33.823 23:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.823 23:35:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:33.823 23:35:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:33.823 23:35:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:33.823 23:35:23 -- host/auth.sh@44 -- # digest=sha512 00:32:33.823 23:35:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.823 23:35:23 -- host/auth.sh@44 -- # keyid=1 00:32:33.823 23:35:23 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:33.823 23:35:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:33.823 23:35:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:32:33.823 23:35:23 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:33.823 23:35:23 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:32:33.823 23:35:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:33.823 23:35:23 -- host/auth.sh@68 -- # digest=sha512 00:32:33.823 23:35:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:32:33.823 23:35:23 -- host/auth.sh@68 -- # keyid=1 00:32:33.823 23:35:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:33.823 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.823 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:33.823 23:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:33.823 23:35:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:33.823 23:35:23 -- nvmf/common.sh@717 -- # local ip 00:32:33.823 23:35:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:33.823 23:35:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:33.823 23:35:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.823 23:35:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.823 23:35:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:33.823 23:35:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.823 23:35:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:33.823 23:35:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:33.823 23:35:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:33.823 23:35:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:33.823 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:33.823 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.177 nvme0n1 00:32:34.177 23:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.177 23:35:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.177 23:35:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:34.177 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.177 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.177 23:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.177 23:35:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.177 23:35:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.177 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.177 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.177 23:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.177 23:35:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:34.177 23:35:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:34.177 23:35:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:34.177 23:35:23 -- host/auth.sh@44 -- # digest=sha512 00:32:34.177 23:35:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:34.177 23:35:23 -- host/auth.sh@44 -- # keyid=2 00:32:34.177 23:35:23 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:34.177 23:35:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:34.177 23:35:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:32:34.177 23:35:23 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:34.177 23:35:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:32:34.177 23:35:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:34.177 23:35:23 -- 
host/auth.sh@68 -- # digest=sha512 00:32:34.177 23:35:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:32:34.177 23:35:23 -- host/auth.sh@68 -- # keyid=2 00:32:34.177 23:35:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:34.177 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.177 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.177 23:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.177 23:35:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:34.177 23:35:23 -- nvmf/common.sh@717 -- # local ip 00:32:34.177 23:35:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:34.177 23:35:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:34.177 23:35:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.177 23:35:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.177 23:35:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:34.177 23:35:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.177 23:35:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:34.177 23:35:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:34.177 23:35:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:34.177 23:35:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:34.177 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.177 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.442 nvme0n1 00:32:34.442 23:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.442 23:35:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.442 23:35:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:34.442 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.442 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.442 23:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.703 23:35:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.704 23:35:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.704 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.704 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.704 23:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.704 23:35:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:34.704 23:35:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:34.704 23:35:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:34.704 23:35:23 -- host/auth.sh@44 -- # digest=sha512 00:32:34.704 23:35:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:34.704 23:35:23 -- host/auth.sh@44 -- # keyid=3 00:32:34.704 23:35:23 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:34.704 23:35:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:34.704 23:35:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:32:34.704 23:35:23 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:34.704 23:35:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:32:34.704 23:35:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:34.704 23:35:23 -- host/auth.sh@68 -- # digest=sha512 00:32:34.704 23:35:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:32:34.704 23:35:23 
-- host/auth.sh@68 -- # keyid=3 00:32:34.704 23:35:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:34.704 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.704 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.704 23:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.704 23:35:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:34.704 23:35:23 -- nvmf/common.sh@717 -- # local ip 00:32:34.704 23:35:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:34.704 23:35:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:34.704 23:35:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.704 23:35:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.704 23:35:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:34.704 23:35:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.704 23:35:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:34.704 23:35:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:34.704 23:35:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:34.704 23:35:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:34.704 23:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.704 23:35:23 -- common/autotest_common.sh@10 -- # set +x 00:32:34.966 nvme0n1 00:32:34.966 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.966 23:35:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.966 23:35:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:34.966 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.966 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:34.966 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.966 23:35:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.966 23:35:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.966 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.966 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:34.966 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.966 23:35:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:34.966 23:35:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:34.966 23:35:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:34.966 23:35:24 -- host/auth.sh@44 -- # digest=sha512 00:32:34.966 23:35:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:34.966 23:35:24 -- host/auth.sh@44 -- # keyid=4 00:32:34.966 23:35:24 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:34.966 23:35:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:34.966 23:35:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:32:34.966 23:35:24 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:34.966 23:35:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:32:34.966 23:35:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:34.966 23:35:24 -- host/auth.sh@68 -- # digest=sha512 00:32:34.966 23:35:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:32:34.966 23:35:24 -- host/auth.sh@68 -- # keyid=4 00:32:34.966 23:35:24 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:34.966 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.966 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:34.966 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:34.966 23:35:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:34.966 23:35:24 -- nvmf/common.sh@717 -- # local ip 00:32:34.966 23:35:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:34.966 23:35:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:34.966 23:35:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.966 23:35:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.966 23:35:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:34.966 23:35:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.966 23:35:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:34.966 23:35:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:34.966 23:35:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:34.966 23:35:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.966 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:34.966 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.227 nvme0n1 00:32:35.228 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.228 23:35:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.228 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.228 23:35:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:35.228 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.228 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.228 23:35:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.228 23:35:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.228 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.228 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.228 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.228 23:35:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.228 23:35:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:35.228 23:35:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:35.228 23:35:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:35.228 23:35:24 -- host/auth.sh@44 -- # digest=sha512 00:32:35.228 23:35:24 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:35.228 23:35:24 -- host/auth.sh@44 -- # keyid=0 00:32:35.228 23:35:24 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:35.228 23:35:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:35.228 23:35:24 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:35.228 23:35:24 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:35.228 23:35:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:32:35.228 23:35:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:35.228 23:35:24 -- host/auth.sh@68 -- # digest=sha512 00:32:35.228 23:35:24 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:35.228 23:35:24 -- host/auth.sh@68 -- # keyid=0 00:32:35.228 23:35:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
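Each iteration traced above follows the same five-step shape: program a DHHC-1 secret plus digest/dhgroup on the kernel nvmet target, restrict the SPDK host to the matching parameters, attach, verify the controller name, detach. (The second field of each DHHC-1 key string — 00/01/02/03 across the key slots — selects the optional secret-transformation hash from the NVMe-oF spec, 00 meaning none.) A minimal bash sketch of one such iteration, assuming the Linux nvmet configfs attributes (dhchap_key, dhchap_hash, dhchap_dhgroup) that host/auth.sh drives and SPDK's rpc.py as the client behind rpc_cmd; the configfs path and key slot name are illustrative:

  #!/usr/bin/env bash
  # Sketch of one connect_authenticate pass: target key setup, host attach,
  # verification, teardown. Not the literal test script.
  set -euo pipefail

  hostnqn=nqn.2024-02.io.spdk:host0
  subnqn=nqn.2024-02.io.spdk:cnode0
  digest=sha512 dhgroup=ffdhe6144 keyid=0
  key='DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv:'

  # Target side: install the host secret and negotiation parameters through
  # configfs (attribute names as in Linux nvmet; path is illustrative).
  host_cfg="/sys/kernel/config/nvmet/hosts/$hostnqn"
  echo "hmac($digest)" > "$host_cfg/dhchap_hash"
  echo "$dhgroup"      > "$host_cfg/dhchap_dhgroup"
  echo "$key"          > "$host_cfg/dhchap_key"

  # Host side: allow only the digest/dhgroup under test, then attach with the
  # key slot matching the target's expectation (keys loaded beforehand).
  rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"

  # Authentication succeeded only if the controller actually registered.
  [[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  rpc.py bdev_nvme_detach_controller nvme0

The get_main_ns_ip lines in the trace are the same idea as the hard-coded 10.0.0.1 above: the helper picks the initiator IP per transport (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and echoes it for the attach call.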
00:32:35.228 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.228 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.228 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.228 23:35:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:35.228 23:35:24 -- nvmf/common.sh@717 -- # local ip 00:32:35.228 23:35:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:35.228 23:35:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:35.228 23:35:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.228 23:35:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.228 23:35:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:35.228 23:35:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.228 23:35:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:35.228 23:35:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:35.228 23:35:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:35.228 23:35:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:35.228 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.228 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.801 nvme0n1 00:32:35.801 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.801 23:35:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.801 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.801 23:35:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:35.801 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.801 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.801 23:35:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.801 23:35:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.801 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.801 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.801 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.801 23:35:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:35.801 23:35:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:35.801 23:35:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:35.801 23:35:24 -- host/auth.sh@44 -- # digest=sha512 00:32:35.801 23:35:24 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:35.801 23:35:24 -- host/auth.sh@44 -- # keyid=1 00:32:35.801 23:35:24 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:35.801 23:35:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:35.801 23:35:24 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:35.801 23:35:24 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:35.801 23:35:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:32:35.801 23:35:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:35.801 23:35:24 -- host/auth.sh@68 -- # digest=sha512 00:32:35.801 23:35:24 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:35.801 23:35:24 -- host/auth.sh@68 -- # keyid=1 00:32:35.801 23:35:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:35.801 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.801 23:35:24 -- 
common/autotest_common.sh@10 -- # set +x 00:32:35.801 23:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:35.801 23:35:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:35.801 23:35:24 -- nvmf/common.sh@717 -- # local ip 00:32:35.801 23:35:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:35.801 23:35:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:35.801 23:35:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.801 23:35:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.801 23:35:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:35.801 23:35:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.801 23:35:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:35.801 23:35:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:35.801 23:35:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:35.801 23:35:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:35.801 23:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:35.801 23:35:24 -- common/autotest_common.sh@10 -- # set +x 00:32:36.374 nvme0n1 00:32:36.374 23:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.374 23:35:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.374 23:35:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:36.374 23:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.374 23:35:25 -- common/autotest_common.sh@10 -- # set +x 00:32:36.375 23:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.375 23:35:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.375 23:35:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.375 23:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.375 23:35:25 -- common/autotest_common.sh@10 -- # set +x 00:32:36.375 23:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.375 23:35:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:36.375 23:35:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:36.375 23:35:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:36.375 23:35:25 -- host/auth.sh@44 -- # digest=sha512 00:32:36.375 23:35:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.375 23:35:25 -- host/auth.sh@44 -- # keyid=2 00:32:36.375 23:35:25 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:36.375 23:35:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:36.375 23:35:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:36.375 23:35:25 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:36.375 23:35:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:32:36.375 23:35:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:36.375 23:35:25 -- host/auth.sh@68 -- # digest=sha512 00:32:36.375 23:35:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:36.375 23:35:25 -- host/auth.sh@68 -- # keyid=2 00:32:36.375 23:35:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:36.375 23:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.375 23:35:25 -- common/autotest_common.sh@10 -- # set +x 00:32:36.375 23:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.375 23:35:25 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:32:36.375 23:35:25 -- nvmf/common.sh@717 -- # local ip 00:32:36.375 23:35:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:36.375 23:35:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:36.375 23:35:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.375 23:35:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.375 23:35:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:36.375 23:35:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.375 23:35:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:36.375 23:35:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:36.375 23:35:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:36.375 23:35:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:36.375 23:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.375 23:35:25 -- common/autotest_common.sh@10 -- # set +x 00:32:36.948 nvme0n1 00:32:36.948 23:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.948 23:35:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.948 23:35:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:36.948 23:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.948 23:35:25 -- common/autotest_common.sh@10 -- # set +x 00:32:36.948 23:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.948 23:35:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.948 23:35:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.948 23:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.948 23:35:26 -- common/autotest_common.sh@10 -- # set +x 00:32:36.948 23:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.948 23:35:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:36.948 23:35:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:36.948 23:35:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:36.948 23:35:26 -- host/auth.sh@44 -- # digest=sha512 00:32:36.948 23:35:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.948 23:35:26 -- host/auth.sh@44 -- # keyid=3 00:32:36.948 23:35:26 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:36.948 23:35:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:36.948 23:35:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:36.948 23:35:26 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:36.948 23:35:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:32:36.948 23:35:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:36.948 23:35:26 -- host/auth.sh@68 -- # digest=sha512 00:32:36.948 23:35:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:36.948 23:35:26 -- host/auth.sh@68 -- # keyid=3 00:32:36.948 23:35:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:36.948 23:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.948 23:35:26 -- common/autotest_common.sh@10 -- # set +x 00:32:36.948 23:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:36.948 23:35:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:36.948 23:35:26 -- nvmf/common.sh@717 -- # local ip 00:32:36.948 23:35:26 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:32:36.948 23:35:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:36.948 23:35:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.948 23:35:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.948 23:35:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:36.948 23:35:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.948 23:35:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:36.948 23:35:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:36.948 23:35:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:36.948 23:35:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:36.948 23:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:36.948 23:35:26 -- common/autotest_common.sh@10 -- # set +x 00:32:37.521 nvme0n1 00:32:37.521 23:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:37.521 23:35:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.521 23:35:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:37.521 23:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:37.521 23:35:26 -- common/autotest_common.sh@10 -- # set +x 00:32:37.521 23:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:37.521 23:35:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.521 23:35:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.521 23:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:37.521 23:35:26 -- common/autotest_common.sh@10 -- # set +x 00:32:37.521 23:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:37.521 23:35:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:37.521 23:35:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:37.521 23:35:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:37.521 23:35:26 -- host/auth.sh@44 -- # digest=sha512 00:32:37.521 23:35:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.521 23:35:26 -- host/auth.sh@44 -- # keyid=4 00:32:37.521 23:35:26 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:37.521 23:35:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:37.521 23:35:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:37.521 23:35:26 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:37.521 23:35:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:32:37.521 23:35:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:37.521 23:35:26 -- host/auth.sh@68 -- # digest=sha512 00:32:37.521 23:35:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:37.521 23:35:26 -- host/auth.sh@68 -- # keyid=4 00:32:37.521 23:35:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:37.521 23:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:37.521 23:35:26 -- common/autotest_common.sh@10 -- # set +x 00:32:37.521 23:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:37.521 23:35:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:37.521 23:35:26 -- nvmf/common.sh@717 -- # local ip 00:32:37.521 23:35:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:37.521 23:35:26 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:32:37.521 23:35:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.521 23:35:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.521 23:35:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:37.521 23:35:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.521 23:35:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:37.521 23:35:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:37.521 23:35:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:37.521 23:35:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.522 23:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:37.522 23:35:26 -- common/autotest_common.sh@10 -- # set +x 00:32:38.094 nvme0n1 00:32:38.094 23:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.094 23:35:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.094 23:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.094 23:35:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:38.094 23:35:27 -- common/autotest_common.sh@10 -- # set +x 00:32:38.094 23:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.094 23:35:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.094 23:35:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.094 23:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.094 23:35:27 -- common/autotest_common.sh@10 -- # set +x 00:32:38.094 23:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.094 23:35:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.094 23:35:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:38.094 23:35:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:38.094 23:35:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:38.094 23:35:27 -- host/auth.sh@44 -- # digest=sha512 00:32:38.094 23:35:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:38.094 23:35:27 -- host/auth.sh@44 -- # keyid=0 00:32:38.094 23:35:27 -- host/auth.sh@45 -- # key=DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:38.094 23:35:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:38.094 23:35:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:38.094 23:35:27 -- host/auth.sh@49 -- # echo DHHC-1:00:NzQyMzU3NTBmZTY4YzkwYTU0MjcwZDNlZDc4MjVhM2It42mv: 00:32:38.094 23:35:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:32:38.094 23:35:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:38.094 23:35:27 -- host/auth.sh@68 -- # digest=sha512 00:32:38.094 23:35:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:38.094 23:35:27 -- host/auth.sh@68 -- # keyid=0 00:32:38.094 23:35:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:38.094 23:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.094 23:35:27 -- common/autotest_common.sh@10 -- # set +x 00:32:38.094 23:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.094 23:35:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:38.094 23:35:27 -- nvmf/common.sh@717 -- # local ip 00:32:38.094 23:35:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:38.094 23:35:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:38.094 23:35:27 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.094 23:35:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.094 23:35:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:38.094 23:35:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.094 23:35:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:38.094 23:35:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:38.094 23:35:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:38.094 23:35:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:38.094 23:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.094 23:35:27 -- common/autotest_common.sh@10 -- # set +x 00:32:38.667 nvme0n1 00:32:38.667 23:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.667 23:35:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.667 23:35:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:38.667 23:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.667 23:35:27 -- common/autotest_common.sh@10 -- # set +x 00:32:38.928 23:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.929 23:35:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.929 23:35:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.929 23:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.929 23:35:27 -- common/autotest_common.sh@10 -- # set +x 00:32:38.929 23:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.929 23:35:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:38.929 23:35:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:38.929 23:35:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:38.929 23:35:27 -- host/auth.sh@44 -- # digest=sha512 00:32:38.929 23:35:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:38.929 23:35:27 -- host/auth.sh@44 -- # keyid=1 00:32:38.929 23:35:27 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:38.929 23:35:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:38.929 23:35:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:38.929 23:35:27 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:38.929 23:35:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:32:38.929 23:35:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:38.929 23:35:27 -- host/auth.sh@68 -- # digest=sha512 00:32:38.929 23:35:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:38.929 23:35:27 -- host/auth.sh@68 -- # keyid=1 00:32:38.929 23:35:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:38.929 23:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.929 23:35:27 -- common/autotest_common.sh@10 -- # set +x 00:32:38.929 23:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:38.929 23:35:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:38.929 23:35:27 -- nvmf/common.sh@717 -- # local ip 00:32:38.929 23:35:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:38.929 23:35:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:38.929 23:35:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.929 23:35:27 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.929 23:35:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:38.929 23:35:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.929 23:35:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:38.929 23:35:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:38.929 23:35:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:38.929 23:35:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:38.929 23:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:38.929 23:35:27 -- common/autotest_common.sh@10 -- # set +x 00:32:39.502 nvme0n1 00:32:39.502 23:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:39.502 23:35:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.502 23:35:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:39.502 23:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:39.502 23:35:28 -- common/autotest_common.sh@10 -- # set +x 00:32:39.502 23:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:39.502 23:35:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.502 23:35:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.502 23:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:39.502 23:35:28 -- common/autotest_common.sh@10 -- # set +x 00:32:39.764 23:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:39.764 23:35:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:39.764 23:35:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:39.764 23:35:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:39.764 23:35:28 -- host/auth.sh@44 -- # digest=sha512 00:32:39.764 23:35:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.764 23:35:28 -- host/auth.sh@44 -- # keyid=2 00:32:39.764 23:35:28 -- host/auth.sh@45 -- # key=DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:39.764 23:35:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:39.764 23:35:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:39.764 23:35:28 -- host/auth.sh@49 -- # echo DHHC-1:01:NWNmYTA4NTA5MDI1MjU5ZWI0YmNiMmViNmZlYjNhN2S+VDAV: 00:32:39.764 23:35:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:32:39.764 23:35:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:39.764 23:35:28 -- host/auth.sh@68 -- # digest=sha512 00:32:39.764 23:35:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:39.764 23:35:28 -- host/auth.sh@68 -- # keyid=2 00:32:39.764 23:35:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:39.764 23:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:39.764 23:35:28 -- common/autotest_common.sh@10 -- # set +x 00:32:39.764 23:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:39.764 23:35:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:39.764 23:35:28 -- nvmf/common.sh@717 -- # local ip 00:32:39.764 23:35:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:39.764 23:35:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:39.764 23:35:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.764 23:35:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.764 23:35:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:39.764 23:35:28 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:32:39.764 23:35:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:39.764 23:35:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:39.764 23:35:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:39.764 23:35:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:39.764 23:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:39.764 23:35:28 -- common/autotest_common.sh@10 -- # set +x 00:32:40.338 nvme0n1 00:32:40.338 23:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.338 23:35:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.338 23:35:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:40.338 23:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.338 23:35:29 -- common/autotest_common.sh@10 -- # set +x 00:32:40.338 23:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.338 23:35:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.338 23:35:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.338 23:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.338 23:35:29 -- common/autotest_common.sh@10 -- # set +x 00:32:40.338 23:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.338 23:35:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:40.338 23:35:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:40.338 23:35:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:40.338 23:35:29 -- host/auth.sh@44 -- # digest=sha512 00:32:40.338 23:35:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:40.338 23:35:29 -- host/auth.sh@44 -- # keyid=3 00:32:40.338 23:35:29 -- host/auth.sh@45 -- # key=DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:40.338 23:35:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:40.338 23:35:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:40.338 23:35:29 -- host/auth.sh@49 -- # echo DHHC-1:02:MGY3YTY3ODY5MGExY2RjODc5MDZkZjkyMTgxMjlhZGM4NThjMmFiYzYxZjQwM2Q0JOfm/w==: 00:32:40.338 23:35:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:32:40.338 23:35:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:40.338 23:35:29 -- host/auth.sh@68 -- # digest=sha512 00:32:40.338 23:35:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:40.338 23:35:29 -- host/auth.sh@68 -- # keyid=3 00:32:40.338 23:35:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:40.338 23:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.600 23:35:29 -- common/autotest_common.sh@10 -- # set +x 00:32:40.600 23:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.600 23:35:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:40.600 23:35:29 -- nvmf/common.sh@717 -- # local ip 00:32:40.600 23:35:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:40.600 23:35:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:40.600 23:35:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.600 23:35:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.600 23:35:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:40.600 23:35:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.600 23:35:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:40.600 23:35:29 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:40.600 23:35:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:40.600 23:35:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:40.600 23:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.600 23:35:29 -- common/autotest_common.sh@10 -- # set +x 00:32:41.173 nvme0n1 00:32:41.173 23:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.173 23:35:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.173 23:35:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:41.173 23:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.173 23:35:30 -- common/autotest_common.sh@10 -- # set +x 00:32:41.173 23:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.173 23:35:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.173 23:35:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.173 23:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.173 23:35:30 -- common/autotest_common.sh@10 -- # set +x 00:32:41.173 23:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.173 23:35:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:41.173 23:35:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:41.173 23:35:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:41.173 23:35:30 -- host/auth.sh@44 -- # digest=sha512 00:32:41.173 23:35:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.173 23:35:30 -- host/auth.sh@44 -- # keyid=4 00:32:41.173 23:35:30 -- host/auth.sh@45 -- # key=DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:41.173 23:35:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:41.173 23:35:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:41.173 23:35:30 -- host/auth.sh@49 -- # echo DHHC-1:03:YzgzZDNjYzM3NjNkMGE4NDU0OWU3MDE1NDc3OTRhNWZiMGRkOWFiZTUwZDMxYWE1ZGUxNDc4ZWE5YjFkM2IzZOaWO0U=: 00:32:41.173 23:35:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:32:41.173 23:35:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:41.173 23:35:30 -- host/auth.sh@68 -- # digest=sha512 00:32:41.173 23:35:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:41.173 23:35:30 -- host/auth.sh@68 -- # keyid=4 00:32:41.173 23:35:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:41.173 23:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.173 23:35:30 -- common/autotest_common.sh@10 -- # set +x 00:32:41.435 23:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.435 23:35:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:41.435 23:35:30 -- nvmf/common.sh@717 -- # local ip 00:32:41.435 23:35:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:41.435 23:35:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:41.435 23:35:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.435 23:35:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.435 23:35:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:41.435 23:35:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.435 23:35:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:41.435 23:35:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:41.435 23:35:30 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:41.435 23:35:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.435 23:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.435 23:35:30 -- common/autotest_common.sh@10 -- # set +x 00:32:42.008 nvme0n1 00:32:42.008 23:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:42.008 23:35:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.008 23:35:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:42.008 23:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:42.008 23:35:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.008 23:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:42.008 23:35:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.008 23:35:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.008 23:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:42.008 23:35:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.008 23:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:42.008 23:35:31 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:42.008 23:35:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:42.008 23:35:31 -- host/auth.sh@44 -- # digest=sha256 00:32:42.008 23:35:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.008 23:35:31 -- host/auth.sh@44 -- # keyid=1 00:32:42.008 23:35:31 -- host/auth.sh@45 -- # key=DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:42.008 23:35:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:42.008 23:35:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:42.008 23:35:31 -- host/auth.sh@49 -- # echo DHHC-1:00:MDIwY2FkNDQ3ZmU2OGE2NWFjMDMyNDVjZDc1OGEzZDk3NTM2YmVmZGE0Yjk2NTIxQfjyVA==: 00:32:42.008 23:35:31 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:42.008 23:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:42.008 23:35:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.008 23:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:42.008 23:35:31 -- host/auth.sh@119 -- # get_main_ns_ip 00:32:42.008 23:35:31 -- nvmf/common.sh@717 -- # local ip 00:32:42.008 23:35:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:42.008 23:35:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:42.009 23:35:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.009 23:35:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.009 23:35:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:42.009 23:35:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.009 23:35:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:42.009 23:35:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:42.009 23:35:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:42.009 23:35:31 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:42.009 23:35:31 -- common/autotest_common.sh@638 -- # local es=0 00:32:42.009 23:35:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:42.009 
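For reference, each pass of the key loop traced above reduces to the sketch below; the configfs attribute names are an assumption inferred from the three echo calls inside nvmet_auth_set_key, and scripts/rpc.py stands in for the rpc_cmd wrapper:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"     # assumed destination of the 'hmac(sha512)' echo
  echo ffdhe8192 > "$host/dhchap_dhgroup"       # assumed destination of the dhgroup echo
  echo 'DHHC-1:01:...' > "$host/dhchap_key"     # key material elided
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The two NOT-wrapped attach calls that follow are the negative half of the test: connecting with no key, then with the wrong key, must fail with JSON-RPC error -32602.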
23:35:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:32:42.009 23:35:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:42.009 23:35:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:32:42.009 23:35:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:42.009 23:35:31 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:42.009 23:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:42.009 23:35:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.270 request: 00:32:42.270 { 00:32:42.270 "name": "nvme0", 00:32:42.270 "trtype": "tcp", 00:32:42.270 "traddr": "10.0.0.1", 00:32:42.270 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:42.270 "adrfam": "ipv4", 00:32:42.270 "trsvcid": "4420", 00:32:42.270 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:42.270 "method": "bdev_nvme_attach_controller", 00:32:42.270 "req_id": 1 00:32:42.270 } 00:32:42.270 Got JSON-RPC error response 00:32:42.270 response: 00:32:42.270 { 00:32:42.270 "code": -32602, 00:32:42.270 "message": "Invalid parameters" 00:32:42.270 } 00:32:42.270 23:35:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:32:42.270 23:35:31 -- common/autotest_common.sh@641 -- # es=1 00:32:42.270 23:35:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:42.270 23:35:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:42.270 23:35:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:42.270 23:35:31 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.270 23:35:31 -- host/auth.sh@121 -- # jq length 00:32:42.270 23:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:42.270 23:35:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.270 23:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:42.270 23:35:31 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:32:42.270 23:35:31 -- host/auth.sh@124 -- # get_main_ns_ip 00:32:42.270 23:35:31 -- nvmf/common.sh@717 -- # local ip 00:32:42.270 23:35:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:42.270 23:35:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:42.270 23:35:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.270 23:35:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.270 23:35:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:42.270 23:35:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.270 23:35:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:42.270 23:35:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:42.270 23:35:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:42.270 23:35:31 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:42.270 23:35:31 -- common/autotest_common.sh@638 -- # local es=0 00:32:42.270 23:35:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:42.270 23:35:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:32:42.270 23:35:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:42.270 23:35:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:32:42.270 23:35:31 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:42.270 23:35:31 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:42.270 23:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:42.270 23:35:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.271 request: 00:32:42.271 { 00:32:42.271 "name": "nvme0", 00:32:42.271 "trtype": "tcp", 00:32:42.271 "traddr": "10.0.0.1", 00:32:42.271 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:42.271 "adrfam": "ipv4", 00:32:42.271 "trsvcid": "4420", 00:32:42.271 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:42.271 "dhchap_key": "key2", 00:32:42.271 "method": "bdev_nvme_attach_controller", 00:32:42.271 "req_id": 1 00:32:42.271 } 00:32:42.271 Got JSON-RPC error response 00:32:42.271 response: 00:32:42.271 { 00:32:42.271 "code": -32602, 00:32:42.271 "message": "Invalid parameters" 00:32:42.271 } 00:32:42.271 23:35:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:32:42.271 23:35:31 -- common/autotest_common.sh@641 -- # es=1 00:32:42.271 23:35:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:42.271 23:35:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:42.271 23:35:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:42.271 23:35:31 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.271 23:35:31 -- host/auth.sh@127 -- # jq length 00:32:42.271 23:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:42.271 23:35:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.271 23:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:42.271 23:35:31 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:32:42.271 23:35:31 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:32:42.271 23:35:31 -- host/auth.sh@130 -- # cleanup 00:32:42.271 23:35:31 -- host/auth.sh@24 -- # nvmftestfini 00:32:42.271 23:35:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:42.271 23:35:31 -- nvmf/common.sh@117 -- # sync 00:32:42.271 23:35:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:42.271 23:35:31 -- nvmf/common.sh@120 -- # set +e 00:32:42.271 23:35:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:42.271 23:35:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:42.271 rmmod nvme_tcp 00:32:42.271 rmmod nvme_fabrics 00:32:42.271 23:35:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:42.271 23:35:31 -- nvmf/common.sh@124 -- # set -e 00:32:42.271 23:35:31 -- nvmf/common.sh@125 -- # return 0 00:32:42.271 23:35:31 -- nvmf/common.sh@478 -- # '[' -n 4154002 ']' 00:32:42.271 23:35:31 -- nvmf/common.sh@479 -- # killprocess 4154002 00:32:42.271 23:35:31 -- common/autotest_common.sh@936 -- # '[' -z 4154002 ']' 00:32:42.271 23:35:31 -- common/autotest_common.sh@940 -- # kill -0 4154002 00:32:42.271 23:35:31 -- common/autotest_common.sh@941 -- # uname 00:32:42.532 23:35:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:42.532 23:35:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4154002 00:32:42.532 23:35:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:42.532 23:35:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:42.532 23:35:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4154002' 00:32:42.532 killing process with pid 4154002 00:32:42.532 23:35:31 -- common/autotest_common.sh@955 -- # kill 4154002 00:32:42.532 23:35:31 -- 
common/autotest_common.sh@960 -- # wait 4154002 00:32:42.532 23:35:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:32:42.532 23:35:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:42.532 23:35:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:42.532 23:35:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:42.532 23:35:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:42.532 23:35:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.532 23:35:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.532 23:35:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.079 23:35:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:45.079 23:35:33 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:45.079 23:35:33 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:45.079 23:35:33 -- host/auth.sh@27 -- # clean_kernel_target 00:32:45.079 23:35:33 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:45.079 23:35:33 -- nvmf/common.sh@675 -- # echo 0 00:32:45.079 23:35:33 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:45.079 23:35:33 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:45.079 23:35:33 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:45.079 23:35:33 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:45.079 23:35:33 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:32:45.079 23:35:33 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:32:45.079 23:35:33 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:48.385 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:48.385 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:48.646 23:35:37 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Nvg /tmp/spdk.key-null.ghp /tmp/spdk.key-sha256.IqF /tmp/spdk.key-sha384.6ik /tmp/spdk.key-sha512.Agh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:48.646 23:35:37 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:51.986 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:32:51.986 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:51.986 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:51.986 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:52.245 00:32:52.245 real 0m57.156s 00:32:52.245 user 0m50.460s 00:32:52.245 sys 0m14.990s 00:32:52.245 23:35:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:52.245 23:35:41 -- common/autotest_common.sh@10 -- # set +x 00:32:52.245 ************************************ 00:32:52.245 END TEST nvmf_auth 00:32:52.245 ************************************ 00:32:52.506 23:35:41 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:32:52.506 23:35:41 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:52.506 23:35:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:52.506 23:35:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:52.506 23:35:41 -- common/autotest_common.sh@10 -- # set +x 00:32:52.506 ************************************ 00:32:52.506 START TEST nvmf_digest 00:32:52.506 ************************************ 00:32:52.506 23:35:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:52.767 * Looking for test storage... 
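Before the digest suite spins up, it is worth condensing the kernel-target teardown the auth test just performed; everything below is lifted from the trace except the destination of the bare 'echo 0', which is assumed to be the namespace enable flag:

  cfg=/sys/kernel/config/nvmet
  subnqn=nqn.2024-02.io.spdk:cnode0
  rm "$cfg/subsystems/$subnqn/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
  echo 0 > "$cfg/subsystems/$subnqn/namespaces/1/enable"   # assumption: disable the namespace first
  rm -f "$cfg/ports/1/subsystems/$subnqn"                  # unlink the port before any rmdir
  rmdir "$cfg/subsystems/$subnqn/namespaces/1"
  rmdir "$cfg/ports/1"
  rmdir "$cfg/subsystems/$subnqn"
  modprobe -r nvmet_tcp nvmet

The strict order (unlink, then children, then parents) is what lets the rmdir calls succeed on configfs.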
00:32:52.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:52.767 23:35:41 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.767 23:35:41 -- nvmf/common.sh@7 -- # uname -s 00:32:52.767 23:35:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.767 23:35:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.767 23:35:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.767 23:35:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.767 23:35:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.767 23:35:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.767 23:35:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.767 23:35:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.767 23:35:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.767 23:35:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.767 23:35:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:52.767 23:35:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:52.767 23:35:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.767 23:35:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.767 23:35:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.767 23:35:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.767 23:35:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.767 23:35:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.767 23:35:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.767 23:35:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.767 23:35:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.768 23:35:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.768 23:35:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.768 23:35:41 -- paths/export.sh@5 -- # export PATH 00:32:52.768 23:35:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.768 23:35:41 -- nvmf/common.sh@47 -- # : 0 00:32:52.768 23:35:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:52.768 23:35:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:52.768 23:35:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.768 23:35:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.768 23:35:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.768 23:35:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:52.768 23:35:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:52.768 23:35:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:52.768 23:35:41 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:52.768 23:35:41 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:52.768 23:35:41 -- host/digest.sh@16 -- # runtime=2 00:32:52.768 23:35:41 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:52.768 23:35:41 -- host/digest.sh@138 -- # nvmftestinit 00:32:52.768 23:35:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:52.768 23:35:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.768 23:35:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:52.768 23:35:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:52.768 23:35:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:52.768 23:35:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.768 23:35:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:52.768 23:35:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.768 23:35:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:32:52.768 23:35:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:32:52.768 23:35:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:52.768 23:35:41 -- common/autotest_common.sh@10 -- # set +x 00:33:00.913 23:35:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:00.913 23:35:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:33:00.913 23:35:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:00.913 23:35:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:00.913 23:35:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:00.913 23:35:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:00.913 23:35:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:00.913 23:35:48 -- 
nvmf/common.sh@295 -- # net_devs=() 00:33:00.913 23:35:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:00.913 23:35:48 -- nvmf/common.sh@296 -- # e810=() 00:33:00.913 23:35:48 -- nvmf/common.sh@296 -- # local -ga e810 00:33:00.913 23:35:48 -- nvmf/common.sh@297 -- # x722=() 00:33:00.913 23:35:48 -- nvmf/common.sh@297 -- # local -ga x722 00:33:00.913 23:35:48 -- nvmf/common.sh@298 -- # mlx=() 00:33:00.913 23:35:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:33:00.913 23:35:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.913 23:35:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:00.913 23:35:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:00.913 23:35:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:00.913 23:35:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:00.913 23:35:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:00.913 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:00.913 23:35:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:00.913 23:35:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:00.913 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:00.913 23:35:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:00.913 23:35:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:00.913 23:35:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.913 23:35:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:00.913 23:35:48 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.913 23:35:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:00.913 Found net devices under 0000:31:00.0: cvl_0_0 00:33:00.913 23:35:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.913 23:35:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:00.913 23:35:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.913 23:35:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:00.913 23:35:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.913 23:35:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:00.913 Found net devices under 0000:31:00.1: cvl_0_1 00:33:00.913 23:35:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.913 23:35:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:33:00.913 23:35:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:33:00.913 23:35:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:33:00.913 23:35:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:33:00.913 23:35:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.913 23:35:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.913 23:35:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:00.913 23:35:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:00.913 23:35:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:00.913 23:35:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:00.913 23:35:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:00.913 23:35:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:00.913 23:35:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:00.913 23:35:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:00.913 23:35:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:00.913 23:35:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:00.913 23:35:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:00.913 23:35:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:00.913 23:35:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:00.913 23:35:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:00.913 23:35:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:00.913 23:35:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:00.913 23:35:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:00.913 23:35:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:00.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:00.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:33:00.913 00:33:00.913 --- 10.0.0.2 ping statistics --- 00:33:00.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.913 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:33:00.913 23:35:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:00.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:00.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:33:00.913 00:33:00.913 --- 10.0.0.1 ping statistics --- 00:33:00.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.913 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:33:00.913 23:35:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.913 23:35:49 -- nvmf/common.sh@411 -- # return 0 00:33:00.913 23:35:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:33:00.913 23:35:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.913 23:35:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:00.913 23:35:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:00.913 23:35:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.913 23:35:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:00.913 23:35:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:00.913 23:35:49 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:00.913 23:35:49 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:00.913 23:35:49 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:00.913 23:35:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:00.913 23:35:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:00.913 23:35:49 -- common/autotest_common.sh@10 -- # set +x 00:33:00.914 ************************************ 00:33:00.914 START TEST nvmf_digest_clean 00:33:00.914 ************************************ 00:33:00.914 23:35:49 -- common/autotest_common.sh@1111 -- # run_digest 00:33:00.914 23:35:49 -- host/digest.sh@120 -- # local dsa_initiator 00:33:00.914 23:35:49 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:00.914 23:35:49 -- host/digest.sh@121 -- # dsa_initiator=false 00:33:00.914 23:35:49 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:00.914 23:35:49 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:00.914 23:35:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:33:00.914 23:35:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:00.914 23:35:49 -- common/autotest_common.sh@10 -- # set +x 00:33:00.914 23:35:49 -- nvmf/common.sh@470 -- # nvmfpid=4170440 00:33:00.914 23:35:49 -- nvmf/common.sh@471 -- # waitforlisten 4170440 00:33:00.914 23:35:49 -- common/autotest_common.sh@817 -- # '[' -z 4170440 ']' 00:33:00.914 23:35:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:00.914 23:35:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.914 23:35:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:00.914 23:35:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.914 23:35:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:00.914 23:35:49 -- common/autotest_common.sh@10 -- # set +x 00:33:00.914 [2024-04-26 23:35:49.442648] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
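The target being started here runs inside the cvl_0_0_ns_spdk namespace assembled a few lines up; condensed from the trace, the plumbing is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # 0.608 ms above: the target address answers

Hence nvmf_tgt is launched with 'ip netns exec cvl_0_0_ns_spdk' prefixed to every target command, and it listens on 10.0.0.2 while bdevperf connects from the root namespace.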
00:33:00.914 [2024-04-26 23:35:49.442691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.914 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.914 [2024-04-26 23:35:49.508729] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.914 [2024-04-26 23:35:49.537596] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.914 [2024-04-26 23:35:49.537632] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.914 [2024-04-26 23:35:49.537640] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.914 [2024-04-26 23:35:49.537646] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.914 [2024-04-26 23:35:49.537651] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.914 [2024-04-26 23:35:49.537670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.174 23:35:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:01.174 23:35:50 -- common/autotest_common.sh@850 -- # return 0 00:33:01.174 23:35:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:33:01.175 23:35:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:01.175 23:35:50 -- common/autotest_common.sh@10 -- # set +x 00:33:01.175 23:35:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.175 23:35:50 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:01.175 23:35:50 -- host/digest.sh@126 -- # common_target_config 00:33:01.175 23:35:50 -- host/digest.sh@43 -- # rpc_cmd 00:33:01.175 23:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:01.175 23:35:50 -- common/autotest_common.sh@10 -- # set +x 00:33:01.175 null0 00:33:01.175 [2024-04-26 23:35:50.309807] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:01.175 [2024-04-26 23:35:50.333986] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.175 23:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:01.175 23:35:50 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:01.175 23:35:50 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:01.175 23:35:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:01.175 23:35:50 -- host/digest.sh@80 -- # rw=randread 00:33:01.175 23:35:50 -- host/digest.sh@80 -- # bs=4096 00:33:01.175 23:35:50 -- host/digest.sh@80 -- # qd=128 00:33:01.175 23:35:50 -- host/digest.sh@80 -- # scan_dsa=false 00:33:01.175 23:35:50 -- host/digest.sh@83 -- # bperfpid=4170762 00:33:01.175 23:35:50 -- host/digest.sh@84 -- # waitforlisten 4170762 /var/tmp/bperf.sock 00:33:01.175 23:35:50 -- common/autotest_common.sh@817 -- # '[' -z 4170762 ']' 00:33:01.175 23:35:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:01.175 23:35:50 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:01.175 23:35:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:01.175 23:35:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:01.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:01.175 23:35:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:01.175 23:35:50 -- common/autotest_common.sh@10 -- # set +x 00:33:01.175 [2024-04-26 23:35:50.387096] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:33:01.175 [2024-04-26 23:35:50.387142] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170762 ] 00:33:01.175 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.436 [2024-04-26 23:35:50.446061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.436 [2024-04-26 23:35:50.474952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.436 23:35:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:01.436 23:35:50 -- common/autotest_common.sh@850 -- # return 0 00:33:01.436 23:35:50 -- host/digest.sh@86 -- # false 00:33:01.436 23:35:50 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:01.436 23:35:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:01.696 23:35:50 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.697 23:35:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.957 nvme0n1 00:33:01.957 23:35:51 -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:01.957 23:35:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:01.957 Running I/O for 2 seconds... 
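Each run_bperf invocation follows the fixed pattern just traced: start bdevperf suspended, finish framework init over its private RPC socket, attach the controller with TCP data digest enabled, then drive I/O from the helper script (long absolute paths shortened; backgrounding of bdevperf is implied by the recorded pid):

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # --ddgst = crc32c data digest
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests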
00:33:04.513 00:33:04.513 Latency(us) 00:33:04.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.513 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:04.513 nvme0n1 : 2.00 19847.58 77.53 0.00 0.00 6441.61 3003.73 21189.97 00:33:04.513 =================================================================================================================== 00:33:04.514 Total : 19847.58 77.53 0.00 0.00 6441.61 3003.73 21189.97 00:33:04.514 0 00:33:04.514 23:35:53 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:04.514 23:35:53 -- host/digest.sh@93 -- # get_accel_stats 00:33:04.514 23:35:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:04.514 23:35:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:04.514 | select(.opcode=="crc32c") 00:33:04.514 | "\(.module_name) \(.executed)"' 00:33:04.514 23:35:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:04.514 23:35:53 -- host/digest.sh@94 -- # false 00:33:04.514 23:35:53 -- host/digest.sh@94 -- # exp_module=software 00:33:04.514 23:35:53 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:04.514 23:35:53 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:04.514 23:35:53 -- host/digest.sh@98 -- # killprocess 4170762 00:33:04.514 23:35:53 -- common/autotest_common.sh@936 -- # '[' -z 4170762 ']' 00:33:04.514 23:35:53 -- common/autotest_common.sh@940 -- # kill -0 4170762 00:33:04.514 23:35:53 -- common/autotest_common.sh@941 -- # uname 00:33:04.514 23:35:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:04.514 23:35:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4170762 00:33:04.514 23:35:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:04.514 23:35:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:04.514 23:35:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4170762' 00:33:04.514 killing process with pid 4170762 00:33:04.514 23:35:53 -- common/autotest_common.sh@955 -- # kill 4170762 00:33:04.514 Received shutdown signal, test time was about 2.000000 seconds 00:33:04.514 00:33:04.514 Latency(us) 00:33:04.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.514 =================================================================================================================== 00:33:04.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:04.514 23:35:53 -- common/autotest_common.sh@960 -- # wait 4170762 00:33:04.514 23:35:53 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:04.514 23:35:53 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:04.514 23:35:53 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:04.514 23:35:53 -- host/digest.sh@80 -- # rw=randread 00:33:04.514 23:35:53 -- host/digest.sh@80 -- # bs=131072 00:33:04.514 23:35:53 -- host/digest.sh@80 -- # qd=16 00:33:04.514 23:35:53 -- host/digest.sh@80 -- # scan_dsa=false 00:33:04.514 23:35:53 -- host/digest.sh@83 -- # bperfpid=4171414 00:33:04.514 23:35:53 -- host/digest.sh@84 -- # waitforlisten 4171414 /var/tmp/bperf.sock 00:33:04.514 23:35:53 -- common/autotest_common.sh@817 -- # '[' -z 4171414 ']' 00:33:04.514 23:35:53 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:04.514 23:35:53 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:04.514 23:35:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:04.514 23:35:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:04.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:04.514 23:35:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:04.514 23:35:53 -- common/autotest_common.sh@10 -- # set +x 00:33:04.514 [2024-04-26 23:35:53.597615] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:33:04.514 [2024-04-26 23:35:53.597669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171414 ] 00:33:04.514 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:04.514 Zero copy mechanism will not be used. 00:33:04.514 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.514 [2024-04-26 23:35:53.657868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.514 [2024-04-26 23:35:53.685475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.456 23:35:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:05.456 23:35:54 -- common/autotest_common.sh@850 -- # return 0 00:33:05.456 23:35:54 -- host/digest.sh@86 -- # false 00:33:05.456 23:35:54 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:05.456 23:35:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:05.456 23:35:54 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.456 23:35:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.717 nvme0n1 00:33:05.717 23:35:54 -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:05.717 23:35:54 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:05.717 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:05.717 Zero copy mechanism will not be used. 00:33:05.717 Running I/O for 2 seconds... 
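While this second (128 KiB, QD 16) read pass runs, note how every pass gets scored, as first seen after the 4 KiB run: pull the crc32c counters out of the accel layer and check which module executed them:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints e.g. 'software 19847' (count illustrative); the test then asserts
  # executed > 0 and module_name == software, since DSA offload is off (scan_dsa=false)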
00:33:07.628 00:33:07.629 Latency(us) 00:33:07.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.629 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:07.629 nvme0n1 : 2.00 2903.75 362.97 0.00 0.00 5506.03 1467.73 12014.93 00:33:07.629 =================================================================================================================== 00:33:07.629 Total : 2903.75 362.97 0.00 0.00 5506.03 1467.73 12014.93 00:33:07.629 0 00:33:07.629 23:35:56 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:07.629 23:35:56 -- host/digest.sh@93 -- # get_accel_stats 00:33:07.629 23:35:56 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:07.629 23:35:56 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:07.629 | select(.opcode=="crc32c") 00:33:07.629 | "\(.module_name) \(.executed)"' 00:33:07.629 23:35:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:07.889 23:35:57 -- host/digest.sh@94 -- # false 00:33:07.889 23:35:57 -- host/digest.sh@94 -- # exp_module=software 00:33:07.889 23:35:57 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:07.889 23:35:57 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:07.889 23:35:57 -- host/digest.sh@98 -- # killprocess 4171414 00:33:07.889 23:35:57 -- common/autotest_common.sh@936 -- # '[' -z 4171414 ']' 00:33:07.889 23:35:57 -- common/autotest_common.sh@940 -- # kill -0 4171414 00:33:07.889 23:35:57 -- common/autotest_common.sh@941 -- # uname 00:33:07.889 23:35:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:07.889 23:35:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4171414 00:33:07.889 23:35:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:07.889 23:35:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:07.889 23:35:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4171414' 00:33:07.889 killing process with pid 4171414 00:33:07.889 23:35:57 -- common/autotest_common.sh@955 -- # kill 4171414 00:33:07.889 Received shutdown signal, test time was about 2.000000 seconds 00:33:07.889 00:33:07.889 Latency(us) 00:33:07.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.889 =================================================================================================================== 00:33:07.889 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.889 23:35:57 -- common/autotest_common.sh@960 -- # wait 4171414 00:33:08.150 23:35:57 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:08.150 23:35:57 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:08.150 23:35:57 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:08.150 23:35:57 -- host/digest.sh@80 -- # rw=randwrite 00:33:08.150 23:35:57 -- host/digest.sh@80 -- # bs=4096 00:33:08.150 23:35:57 -- host/digest.sh@80 -- # qd=128 00:33:08.150 23:35:57 -- host/digest.sh@80 -- # scan_dsa=false 00:33:08.150 23:35:57 -- host/digest.sh@83 -- # bperfpid=4172119 00:33:08.150 23:35:57 -- host/digest.sh@84 -- # waitforlisten 4172119 /var/tmp/bperf.sock 00:33:08.150 23:35:57 -- common/autotest_common.sh@817 -- # '[' -z 4172119 ']' 00:33:08.150 23:35:57 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:08.150 23:35:57 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:08.150 23:35:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:08.150 23:35:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:08.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:08.150 23:35:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:08.150 23:35:57 -- common/autotest_common.sh@10 -- # set +x 00:33:08.150 [2024-04-26 23:35:57.247938] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:33:08.150 [2024-04-26 23:35:57.247996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172119 ] 00:33:08.150 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.150 [2024-04-26 23:35:57.306782] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.150 [2024-04-26 23:35:57.334747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.150 23:35:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:08.150 23:35:57 -- common/autotest_common.sh@850 -- # return 0 00:33:08.150 23:35:57 -- host/digest.sh@86 -- # false 00:33:08.150 23:35:57 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:08.150 23:35:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:08.410 23:35:57 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.410 23:35:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.670 nvme0n1 00:33:08.670 23:35:57 -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:08.670 23:35:57 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:08.935 Running I/O for 2 seconds... 
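Between passes, each bdevperf instance is shut down through the killprocess helper; its traced steps amount to the sketch below (the sudo comparison guarding the kill path is an inference from the branch shown):

  kill -0 "$pid"                           # fails fast if the process already exited
  [[ $(uname) == Linux ]]
  name=$(ps --no-headers -o comm= "$pid")  # reactor_1 here: a bdevperf core thread
  [[ $name != sudo ]]                      # assumed intent of the '[ reactor_1 = sudo ]' test
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                              # reap it so the RPC socket is free for the next run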
00:33:10.850
00:33:10.850 Latency(us)
00:33:10.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:10.850 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:10.850 nvme0n1 : 2.00 21123.38 82.51 0.00 0.00 6049.63 2853.55 15510.19
00:33:10.850 ===================================================================================================================
00:33:10.850 Total : 21123.38 82.51 0.00 0.00 6049.63 2853.55 15510.19
00:33:10.850 0
00:33:10.850 23:35:59 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:10.850 23:35:59 -- host/digest.sh@93 -- # get_accel_stats
00:33:10.850 23:35:59 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:10.850 23:35:59 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:10.850 | select(.opcode=="crc32c")
00:33:10.850 | "\(.module_name) \(.executed)"'
00:33:10.850 23:35:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:11.111 23:36:00 -- host/digest.sh@94 -- # false
00:33:11.111 23:36:00 -- host/digest.sh@94 -- # exp_module=software
00:33:11.111 23:36:00 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:11.111 23:36:00 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:11.111 23:36:00 -- host/digest.sh@98 -- # killprocess 4172119
00:33:11.111 23:36:00 -- common/autotest_common.sh@936 -- # '[' -z 4172119 ']'
00:33:11.111 23:36:00 -- common/autotest_common.sh@940 -- # kill -0 4172119
00:33:11.111 23:36:00 -- common/autotest_common.sh@941 -- # uname
00:33:11.111 23:36:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:33:11.111 23:36:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4172119
00:33:11.111 23:36:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:33:11.111 23:36:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:33:11.111 23:36:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4172119'
00:33:11.111 killing process with pid 4172119
00:33:11.111 23:36:00 -- common/autotest_common.sh@955 -- # kill 4172119
00:33:11.111 Received shutdown signal, test time was about 2.000000 seconds
00:33:11.111
00:33:11.111 Latency(us)
00:33:11.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:11.111 ===================================================================================================================
00:33:11.111 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:11.111 23:36:00 -- common/autotest_common.sh@960 -- # wait 4172119
00:33:11.111 23:36:00 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:33:11.111 23:36:00 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:33:11.111 23:36:00 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:33:11.112 23:36:00 -- host/digest.sh@80 -- # rw=randwrite
00:33:11.112 23:36:00 -- host/digest.sh@80 -- # bs=131072
00:33:11.112 23:36:00 -- host/digest.sh@80 -- # qd=16
00:33:11.112 23:36:00 -- host/digest.sh@80 -- # scan_dsa=false
00:33:11.112 23:36:00 -- host/digest.sh@83 -- # bperfpid=4172654
00:33:11.112 23:36:00 -- host/digest.sh@84 -- # waitforlisten 4172654 /var/tmp/bperf.sock
00:33:11.112 23:36:00 -- common/autotest_common.sh@817 -- # '[' -z 4172654 ']'
00:33:11.112 23:36:00 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:33:11.112 23:36:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:11.112 23:36:00 -- common/autotest_common.sh@822 -- # local max_retries=100
00:33:11.112 23:36:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:11.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:11.112 23:36:00 -- common/autotest_common.sh@826 -- # xtrace_disable
00:33:11.112 23:36:00 -- common/autotest_common.sh@10 -- # set +x
00:33:11.112 [2024-04-26 23:36:00.363296] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:33:11.112 [2024-04-26 23:36:00.363353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172654 ]
00:33:11.112 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:11.112 Zero copy mechanism will not be used.
00:33:11.373 EAL: No free 2048 kB hugepages reported on node 1
00:33:11.373 [2024-04-26 23:36:00.422774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:11.373 [2024-04-26 23:36:00.451802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:11.373 23:36:00 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:33:11.373 23:36:00 -- common/autotest_common.sh@850 -- # return 0
00:33:11.373 23:36:00 -- host/digest.sh@86 -- # false
00:33:11.373 23:36:00 -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:33:11.373 23:36:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:33:11.635 23:36:00 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:11.635 23:36:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:11.897 nvme0n1
00:33:11.897 23:36:01 -- host/digest.sh@92 -- # bperf_py perform_tests
00:33:11.897 23:36:01 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:11.897 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:11.897 Zero copy mechanism will not be used.
00:33:11.897 Running I/O for 2 seconds...
00:33:14.534
00:33:14.534 Latency(us)
00:33:14.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:14.534 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:14.534 nvme0n1 : 2.00 4337.59 542.20 0.00 0.00 3683.76 1658.88 11632.64
00:33:14.534 ===================================================================================================================
00:33:14.534 Total : 4337.59 542.20 0.00 0.00 3683.76 1658.88 11632.64
00:33:14.534 0
00:33:14.534 23:36:03 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:14.534 23:36:03 -- host/digest.sh@93 -- # get_accel_stats
00:33:14.534 23:36:03 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:14.534 23:36:03 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:14.534 | select(.opcode=="crc32c")
00:33:14.534 | "\(.module_name) \(.executed)"'
00:33:14.534 23:36:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:14.534 23:36:03 -- host/digest.sh@94 -- # false
00:33:14.534 23:36:03 -- host/digest.sh@94 -- # exp_module=software
00:33:14.534 23:36:03 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:14.534 23:36:03 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:14.534 23:36:03 -- host/digest.sh@98 -- # killprocess 4172654
00:33:14.534 23:36:03 -- common/autotest_common.sh@936 -- # '[' -z 4172654 ']'
00:33:14.534 23:36:03 -- common/autotest_common.sh@940 -- # kill -0 4172654
00:33:14.535 23:36:03 -- common/autotest_common.sh@941 -- # uname
00:33:14.535 23:36:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:33:14.535 23:36:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4172654
00:33:14.535 23:36:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:33:14.535 23:36:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:33:14.535 23:36:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4172654'
00:33:14.535 killing process with pid 4172654
00:33:14.535 23:36:03 -- common/autotest_common.sh@955 -- # kill 4172654
00:33:14.535 Received shutdown signal, test time was about 2.000000 seconds
00:33:14.535
00:33:14.535 Latency(us)
00:33:14.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:14.535 ===================================================================================================================
00:33:14.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:14.535 23:36:03 -- common/autotest_common.sh@960 -- # wait 4172654
00:33:14.535 23:36:03 -- host/digest.sh@132 -- # killprocess 4170440
00:33:14.535 23:36:03 -- common/autotest_common.sh@936 -- # '[' -z 4170440 ']'
00:33:14.535 23:36:03 -- common/autotest_common.sh@940 -- # kill -0 4170440
00:33:14.535 23:36:03 -- common/autotest_common.sh@941 -- # uname
00:33:14.535 23:36:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:33:14.535 23:36:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4170440
00:33:14.535 23:36:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:33:14.535 23:36:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:33:14.535 23:36:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4170440'
00:33:14.535 killing process with pid 4170440
00:33:14.535 23:36:03 -- common/autotest_common.sh@955 -- # kill 4170440
00:33:14.535 23:36:03 -- common/autotest_common.sh@960 -- # wait 4170440
00:33:14.535
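Both bdevperf runs above end with the same pass check: pull accel statistics over the bperf socket and confirm the crc32c digest work ran in the expected module (software here, since scan_dsa=false). The summary tables are also self-consistent: 21123.38 IOPS × 4 KiB ≈ 82.51 MiB/s and 4337.59 IOPS × 128 KiB ≈ 542.20 MiB/s. A stand-alone sketch of the helper pattern and the check, assuming $rootdir points at the SPDK checkout (the real implementations live in host/digest.sh):

bperf_rpc() {
    # bdevperf listens on its own RPC socket (-r /var/tmp/bperf.sock),
    # separate from the nvmf target's default /var/tmp/spdk.sock.
    "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"
}

bperf_py() {
    # perform_tests is an RPC served by bdevperf itself, via its companion script.
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock "$@"
}

get_accel_stats() {
    # Reduce the accel stats dump to one line: "<module_name> <executed>".
    bperf_rpc accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
}

read -r acc_module acc_executed < <(get_accel_stats)
# Pass only if crc32c digests were actually computed, in the expected module.
(( acc_executed > 0 )) && [[ $acc_module == software ]]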
00:33:14.535 real 0m14.299s
00:33:14.535 user 0m27.597s
00:33:14.535 sys 0m3.249s
00:33:14.535 23:36:03 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:33:14.535 23:36:03 -- common/autotest_common.sh@10 -- # set +x
00:33:14.535 ************************************
00:33:14.535 END TEST nvmf_digest_clean
00:33:14.535 ************************************
00:33:14.535 23:36:03 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:33:14.535 23:36:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:33:14.535 23:36:03 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:33:14.535 23:36:03 -- common/autotest_common.sh@10 -- # set +x
00:33:14.796 ************************************
00:33:14.796 START TEST nvmf_digest_error
00:33:14.796 ************************************
00:33:14.796 23:36:03 -- common/autotest_common.sh@1111 -- # run_digest_error
00:33:14.796 23:36:03 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:33:14.796 23:36:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:33:14.796 23:36:03 -- common/autotest_common.sh@710 -- # xtrace_disable
00:33:14.796 23:36:03 -- common/autotest_common.sh@10 -- # set +x
00:33:14.796 23:36:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:33:14.796 23:36:03 -- nvmf/common.sh@470 -- # nvmfpid=4173438
00:33:14.796 23:36:03 -- nvmf/common.sh@471 -- # waitforlisten 4173438
00:33:14.796 23:36:03 -- common/autotest_common.sh@817 -- # '[' -z 4173438 ']'
00:33:14.796 23:36:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:14.796 23:36:03 -- common/autotest_common.sh@822 -- # local max_retries=100
00:33:14.796 23:36:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:14.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:14.796 23:36:03 -- common/autotest_common.sh@826 -- # xtrace_disable
00:33:14.796 23:36:03 -- common/autotest_common.sh@10 -- # set +x
00:33:14.796 [2024-04-26 23:36:03.915727] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:33:14.796 [2024-04-26 23:36:03.915777] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:14.796 EAL: No free 2048 kB hugepages reported on node 1
00:33:14.796 [2024-04-26 23:36:03.981278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:14.796 [2024-04-26 23:36:04.010204] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:14.796 [2024-04-26 23:36:04.010240] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:14.796 [2024-04-26 23:36:04.010248] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:14.796 [2024-04-26 23:36:04.010255] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:14.796 [2024-04-26 23:36:04.010260] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
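nvmfappstart launches nvmf_tgt with --wait-for-rpc and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A simplified sketch of what such a helper has to do, not the verbatim implementation (again assuming $rootdir; rpc_get_methods is a standard SPDK RPC that succeeds as soon as the app is listening):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        # Give up early if the process died before it ever listened.
        kill -0 "$pid" 2>/dev/null || return 1
        # Any successful RPC proves the socket is up; rpc_get_methods is cheap.
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.5
    done
    return 1
}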
00:33:14.796 [2024-04-26 23:36:04.010281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.796 23:36:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:14.796 23:36:04 -- common/autotest_common.sh@850 -- # return 0 00:33:14.796 23:36:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:33:14.796 23:36:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:14.796 23:36:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.057 23:36:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:15.057 23:36:04 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:15.057 23:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.057 23:36:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.057 [2024-04-26 23:36:04.090739] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:15.057 23:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.057 23:36:04 -- host/digest.sh@105 -- # common_target_config 00:33:15.057 23:36:04 -- host/digest.sh@43 -- # rpc_cmd 00:33:15.057 23:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.057 23:36:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.057 null0 00:33:15.057 [2024-04-26 23:36:04.165243] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.057 [2024-04-26 23:36:04.189434] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.057 23:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.057 23:36:04 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:15.057 23:36:04 -- host/digest.sh@54 -- # local rw bs qd 00:33:15.057 23:36:04 -- host/digest.sh@56 -- # rw=randread 00:33:15.057 23:36:04 -- host/digest.sh@56 -- # bs=4096 00:33:15.057 23:36:04 -- host/digest.sh@56 -- # qd=128 00:33:15.057 23:36:04 -- host/digest.sh@58 -- # bperfpid=4173590 00:33:15.057 23:36:04 -- host/digest.sh@60 -- # waitforlisten 4173590 /var/tmp/bperf.sock 00:33:15.057 23:36:04 -- common/autotest_common.sh@817 -- # '[' -z 4173590 ']' 00:33:15.057 23:36:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:15.057 23:36:04 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:15.057 23:36:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:15.057 23:36:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:15.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:15.058 23:36:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:15.058 23:36:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.058 [2024-04-26 23:36:04.241048] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
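Between the accel_assign_opc call and the listener notice above, common_target_config feeds a batch of RPCs to rpc_cmd on the target's /var/tmp/spdk.sock. Sketched below as explicit calls, with the caveat that this is an assumption about the batch's contents: only null0, the TCP transport, the cnode1 subsystem, and the 10.0.0.2:4420 listener are visible in the log, and the bdev size/block size are placeholders.

rpc_py() { "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

# Must happen before framework init: route every crc32c through the
# error-injection accel module (this is why the target runs --wait-for-rpc).
rpc_py accel_assign_opc -o crc32c -m error
rpc_py framework_start_init
# Back the subsystem with a null bdev and export it over NVMe/TCP.
rpc_py bdev_null_create null0 100 4096
rpc_py nvmf_create_transport -t tcp
rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420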
00:33:15.058 [2024-04-26 23:36:04.241095] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173590 ] 00:33:15.058 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.058 [2024-04-26 23:36:04.300130] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.319 [2024-04-26 23:36:04.329265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.319 23:36:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:15.319 23:36:04 -- common/autotest_common.sh@850 -- # return 0 00:33:15.319 23:36:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:15.319 23:36:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:15.319 23:36:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:15.319 23:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.319 23:36:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.319 23:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.319 23:36:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:15.319 23:36:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:15.580 nvme0n1 00:33:15.841 23:36:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:15.841 23:36:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.841 23:36:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.841 23:36:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.841 23:36:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:15.841 23:36:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:15.841 Running I/O for 2 seconds... 
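The error path is armed in two halves, all visible in the trace above: the bdevperf side keeps NVMe error statistics and retries forever (--bdev-retry-count -1) so injected failures cannot kill the job, while the target side toggles the accel error module from disable to corrupt with interval 256, corrupting every 256th crc32c it computes. Because the controller was attached with --ddgst, the host verifies data digests on reads and sees exactly what follows. Condensed into one sequence (a sketch; rpc_cmd stands for the target-socket RPC wrapper, bperf_rpc/bperf_py for the bperf-socket ones):

# Host side: track NVMe errors, never give up on retries.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Target side: injection off while the controller attaches cleanly...
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# ...then corrupt every 256th crc32c so the data digests stop matching.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
bperf_py perform_tests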
00:33:15.841 [2024-04-26 23:36:04.944751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920)
00:33:15.841 [2024-04-26 23:36:04.944786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:15.841 [2024-04-26 23:36:04.944797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... dozens more records of this same three-line shape, one per failed READ on tqpair 0x13f7920, from 23:36:04.960800 through 23:36:06.211314 ...]
00:33:17.155 [2024-04-26 23:36:06.226104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920)
00:33:17.155 [2024-04-26 23:36:06.226125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.226134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.236942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.236962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.236972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.252463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.252484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.252493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.265961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.265982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.265991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.278892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.278914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.278922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.291204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.291224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.291233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.302511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.302531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.302540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.315569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.315589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.315598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.329927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.329947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.329956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.340400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.340420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.340430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.354697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.354718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.354727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.368757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.368777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.368786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.380331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.380352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.380361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.155 [2024-04-26 23:36:06.394178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.155 [2024-04-26 23:36:06.394199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.155 [2024-04-26 23:36:06.394210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.409057] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.409077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.409086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.420214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.420234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.420243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.433993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.434014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.434022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.446457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.446478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.446486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.460269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.460289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.460298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.473685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.473706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.473714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.484599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.484620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.484629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.500384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.500405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.500414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.514317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.514338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.514347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.524993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.525014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.525023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.540969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.540991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.540999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.553278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.553299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.553308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.566073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.566093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.566101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.577913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.577933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.577942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.592150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.592171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.592180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.606738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.606759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.606768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.623670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.623691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.623704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.637201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.637221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.637230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.649275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.649296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.649305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.418 [2024-04-26 23:36:06.662551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.418 [2024-04-26 23:36:06.662572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.418 [2024-04-26 23:36:06.662580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.681 [2024-04-26 23:36:06.674965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.681 [2024-04-26 23:36:06.674986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.681 [2024-04-26 23:36:06.674995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.681 [2024-04-26 23:36:06.686507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.681 [2024-04-26 23:36:06.686529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.681 [2024-04-26 23:36:06.686538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.681 [2024-04-26 23:36:06.700472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.681 [2024-04-26 23:36:06.700493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.681 [2024-04-26 23:36:06.700502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.681 [2024-04-26 23:36:06.712501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.681 [2024-04-26 23:36:06.712521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.681 [2024-04-26 23:36:06.712530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.681 [2024-04-26 23:36:06.724333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.681 [2024-04-26 23:36:06.724354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.681 [2024-04-26 23:36:06.724363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.681 [2024-04-26 23:36:06.738185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.681 [2024-04-26 23:36:06.738210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.681 [2024-04-26 23:36:06.738218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.753013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.753034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.753043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.766003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.766023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:17.682 [2024-04-26 23:36:06.766032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.779522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.779542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.779551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.790392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.790413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.790421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.805141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.805162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.805171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.820632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.820652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.820661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.834392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.834412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.834421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.845710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.845731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.845740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.861482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.861503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 
lba:18884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.861511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.877072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.877093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.877102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.890457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.890477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.890485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.902249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.902271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.902279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.917420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.917441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.917449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.682 [2024-04-26 23:36:06.927849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f7920) 00:33:17.682 [2024-04-26 23:36:06.927870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.682 [2024-04-26 23:36:06.927878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.944 00:33:17.944 Latency(us) 00:33:17.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:17.944 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:17.944 nvme0n1 : 2.00 19148.44 74.80 0.00 0.00 6676.75 3222.19 23156.05 00:33:17.944 =================================================================================================================== 00:33:17.944 Total : 19148.44 74.80 0.00 0.00 6676.75 3222.19 23156.05 00:33:17.944 0 00:33:17.944 23:36:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:17.944 23:36:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:17.944 23:36:06 -- host/digest.sh@18 -- # 
00:33:17.944 23:36:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:17.944 23:36:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:17.944 | .driver_specific
00:33:17.944 | .nvme_error
00:33:17.944 | .status_code
00:33:17.944 | .command_transient_transport_error'
00:33:17.944 23:36:07 -- host/digest.sh@71 -- # (( 150 > 0 ))
00:33:17.944 23:36:07 -- host/digest.sh@73 -- # killprocess 4173590
00:33:17.944 23:36:07 -- common/autotest_common.sh@936 -- # '[' -z 4173590 ']'
00:33:17.944 23:36:07 -- common/autotest_common.sh@940 -- # kill -0 4173590
00:33:17.944 23:36:07 -- common/autotest_common.sh@941 -- # uname
00:33:17.944 23:36:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:33:17.944 23:36:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4173590
00:33:17.944 23:36:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:33:17.944 23:36:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:33:17.944 23:36:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4173590'
killing process with pid 4173590
00:33:17.944 23:36:07 -- common/autotest_common.sh@955 -- # kill 4173590
Received shutdown signal, test time was about 2.000000 seconds
00:33:17.944
00:33:17.944 Latency(us)
00:33:17.944 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:33:17.944 ===================================================================================================================
00:33:17.944 Total              :             0.00  0.00   0.00    0.00  0.00     0.00 0.00
00:33:17.944 23:36:07 -- common/autotest_common.sh@960 -- # wait 4173590
00:33:18.206 23:36:07 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:18.206 23:36:07 -- host/digest.sh@54 -- # local rw bs qd
00:33:18.206 23:36:07 -- host/digest.sh@56 -- # rw=randread
00:33:18.206 23:36:07 -- host/digest.sh@56 -- # bs=131072
00:33:18.206 23:36:07 -- host/digest.sh@56 -- # qd=16
00:33:18.206 23:36:07 -- host/digest.sh@58 -- # bperfpid=4174054
00:33:18.206 23:36:07 -- host/digest.sh@60 -- # waitforlisten 4174054 /var/tmp/bperf.sock
00:33:18.206 23:36:07 -- common/autotest_common.sh@817 -- # '[' -z 4174054 ']'
00:33:18.206 23:36:07 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:18.206 23:36:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:18.206 23:36:07 -- common/autotest_common.sh@822 -- # local max_retries=100
00:33:18.206 23:36:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:18.206 23:36:07 -- common/autotest_common.sh@826 -- # xtrace_disable
00:33:18.206 23:36:07 -- common/autotest_common.sh@10 -- # set +x
00:33:18.206 [2024-04-26 23:36:07.344022] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:33:18.206 [2024-04-26 23:36:07.344081] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174054 ]
00:33:18.206 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:18.206 Zero copy mechanism will not be used.
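For reference, the transient-error check traced above reduces to a single RPC plus the jq filter shown; a minimal standalone sketch (assuming, as in this job, a bdevperf instance listening on /var/tmp/bperf.sock and bdev_nvme_set_options called with --nvme-error-stat so the counters exist) would be:

  # Query the per-status-code NVMe error counters kept by the bdev layer and
  # extract the transient transport error count that digest.sh compares to 0.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "nvme0n1 saw $errcount transient transport errors"

The count is nonzero here (150) precisely because every injected digest mismatch completed a READ with status 00/22.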
00:33:18.206 EAL: No free 2048 kB hugepages reported on node 1
00:33:18.206 [2024-04-26 23:36:07.402882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:18.206 [2024-04-26 23:36:07.432556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:19.151 23:36:08 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:33:19.151 23:36:08 -- common/autotest_common.sh@850 -- # return 0
00:33:19.151 23:36:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:19.152 23:36:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:19.152 23:36:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:19.152 23:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:19.152 23:36:08 -- common/autotest_common.sh@10 -- # set +x
00:33:19.152 23:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:19.152 23:36:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:19.152 23:36:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:19.413 nvme0n1
00:33:19.413 23:36:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:19.413 23:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:19.413 23:36:08 -- common/autotest_common.sh@10 -- # set +x
00:33:19.413 23:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:19.413 23:36:08 -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:19.413 23:36:08 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:19.413 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:19.413 Zero copy mechanism will not be used.
00:33:19.413 Running I/O for 2 seconds...
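The trace above is the complete setup for this error-injection pass: unlimited retries with per-status error counting, any stale CRC32C injection cleared, the controller attached with the TCP data digest enabled (--ddgst), CRC32C corruption armed with -i 32, and the queued bdevperf job kicked off. Condensed into the underlying commands (a sketch, not the harness itself: digest.sh routes the accel_error_inject_error calls through its rpc_cmd wrapper, so the socket used for those two calls here is an assumption, and the target at 10.0.0.2:4420 must already be up):

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors per status code, retry forever
  $RPC accel_error_inject_error -o crc32c -t disable                   # clear any previous injection
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # data digest on, so mismatches are detected
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # arm CRC32C corruption (-i 32 as traced above)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                             # start the queued randread job

Because the corruption hits the receive-side CRC32C computation (nvme_tcp_accel_seq_recv_compute_crc32_done below), each affected 128 KiB READ fails its data digest check and is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the repeated triplets that follow show.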
00:33:19.413 [2024-04-26 23:36:08.582763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0)
00:33:19.413 [2024-04-26 23:36:08.582801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:19.413 [2024-04-26 23:36:08.582813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern (data digest error on tqpair 0x7d47e0 -> READ, len:32 -> COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for each corrupted 128 KiB read from 23:36:08.593 onward, differing only in cid, lba, and sqhd ...]
00:33:19.940 [2024-04-26 23:36:09.082858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0)
00:33:19.940 [2024-04-26 23:36:09.082880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.082888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.940 [2024-04-26 23:36:09.094183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:19.940 [2024-04-26 23:36:09.094205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.094214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.940 [2024-04-26 23:36:09.105969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:19.940 [2024-04-26 23:36:09.105996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.106005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.940 [2024-04-26 23:36:09.115883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:19.940 [2024-04-26 23:36:09.115906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.115915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.940 [2024-04-26 23:36:09.126184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:19.940 [2024-04-26 23:36:09.126206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.126215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.940 [2024-04-26 23:36:09.135057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:19.940 [2024-04-26 23:36:09.135079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.135088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.940 [2024-04-26 23:36:09.144281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:19.940 [2024-04-26 23:36:09.144304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.144312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.940 [2024-04-26 23:36:09.157254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7d47e0) 00:33:19.940 [2024-04-26 23:36:09.157277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.157285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.940 [2024-04-26 23:36:09.170986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:19.940 [2024-04-26 23:36:09.171009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.171017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.940 [2024-04-26 23:36:09.184458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:19.940 [2024-04-26 23:36:09.184481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.184489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.940 [2024-04-26 23:36:09.192776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:19.940 [2024-04-26 23:36:09.192798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.940 [2024-04-26 23:36:09.192807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.200411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.200434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.200443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.210509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.210532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.210540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.222877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.222901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.222911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.236147] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.236169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.236178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.249013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.249036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.249045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.260629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.260652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.260661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.271309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.271332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.271340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.283960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.283982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.283991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.294536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.294559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.294571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.304993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.305017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.305026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:33:20.203 [2024-04-26 23:36:09.314957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.314980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.314990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.324087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.324110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.324119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.334074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.334097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.334106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.344970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.344992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.345001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.355908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.355931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.355939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.367124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.367147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.367155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.378429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.378452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-04-26 23:36:09.378461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.203 [2024-04-26 23:36:09.390537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.203 [2024-04-26 23:36:09.390576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-04-26 23:36:09.390585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.204 [2024-04-26 23:36:09.401862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.204 [2024-04-26 23:36:09.401884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-04-26 23:36:09.401893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.204 [2024-04-26 23:36:09.412281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.204 [2024-04-26 23:36:09.412303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-04-26 23:36:09.412312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.204 [2024-04-26 23:36:09.422457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.204 [2024-04-26 23:36:09.422480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-04-26 23:36:09.422489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.204 [2024-04-26 23:36:09.432963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.204 [2024-04-26 23:36:09.432985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-04-26 23:36:09.432994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.204 [2024-04-26 23:36:09.443300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.204 [2024-04-26 23:36:09.443322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-04-26 23:36:09.443331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.204 [2024-04-26 23:36:09.454071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.204 [2024-04-26 23:36:09.454093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-04-26 23:36:09.454102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.465 [2024-04-26 23:36:09.461983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.465 [2024-04-26 23:36:09.462005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.465 [2024-04-26 23:36:09.462014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.465 [2024-04-26 23:36:09.469299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.465 [2024-04-26 23:36:09.469322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.465 [2024-04-26 23:36:09.469330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.465 [2024-04-26 23:36:09.476224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.465 [2024-04-26 23:36:09.476246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.465 [2024-04-26 23:36:09.476254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.465 [2024-04-26 23:36:09.483198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.465 [2024-04-26 23:36:09.483220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.465 [2024-04-26 23:36:09.483229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.465 [2024-04-26 23:36:09.490052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.465 [2024-04-26 23:36:09.490074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.465 [2024-04-26 23:36:09.490083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.465 [2024-04-26 23:36:09.496760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.465 [2024-04-26 23:36:09.496782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.465 [2024-04-26 23:36:09.496791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.465 [2024-04-26 23:36:09.503329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.465 [2024-04-26 23:36:09.503351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.465 [2024-04-26 23:36:09.503360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.465 [2024-04-26 23:36:09.510013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.510035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.510043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.516869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.516891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.516899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.523762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.523790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.523803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.531517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.531541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.531554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.539359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.539382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.539390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.548003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.548025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.548034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.558665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.558687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 
[2024-04-26 23:36:09.558696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.568238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.568260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.568269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.579121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.579143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.579152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.591304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.591325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.591334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.600830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.600859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.600868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.611806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.611828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.611844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.623240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.623266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.623275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.633075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.633098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.633107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.643807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.643830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.643846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.654295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.654318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.654327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.665013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.665036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.665044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.675055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.675077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.675086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.688286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.688309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.688318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.698689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.698711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.698720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.466 [2024-04-26 23:36:09.710330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.466 [2024-04-26 23:36:09.710352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.466 [2024-04-26 23:36:09.710361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.719961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.719983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.719992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.727875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.727897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.727906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.735586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.735608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.735617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.743077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.743099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.743108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.751718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.751741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.751750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.763516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.763539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.763550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.774440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.774464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.774475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.785135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.785159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.785168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.796006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.796030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.796044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.808323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.808347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.808355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.821609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.821632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.821641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.835211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.835234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.835242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.847715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.847738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.847747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.858412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 
[2024-04-26 23:36:09.858434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.858443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.728 [2024-04-26 23:36:09.869418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.728 [2024-04-26 23:36:09.869440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.728 [2024-04-26 23:36:09.869449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.729 [2024-04-26 23:36:09.880782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.729 [2024-04-26 23:36:09.880806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.729 [2024-04-26 23:36:09.880815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.729 [2024-04-26 23:36:09.892479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.729 [2024-04-26 23:36:09.892502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.729 [2024-04-26 23:36:09.892510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.729 [2024-04-26 23:36:09.902008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.729 [2024-04-26 23:36:09.902031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.729 [2024-04-26 23:36:09.902040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.729 [2024-04-26 23:36:09.911994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.729 [2024-04-26 23:36:09.912017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.729 [2024-04-26 23:36:09.912026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.729 [2024-04-26 23:36:09.923200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.729 [2024-04-26 23:36:09.923222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.729 [2024-04-26 23:36:09.923231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.729 [2024-04-26 23:36:09.934395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x7d47e0) 00:33:20.729 [2024-04-26 23:36:09.934418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.729 [2024-04-26 23:36:09.934427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.729 [2024-04-26 23:36:09.944872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.729 [2024-04-26 23:36:09.944894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.729 [2024-04-26 23:36:09.944903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.729 [2024-04-26 23:36:09.955661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.729 [2024-04-26 23:36:09.955685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.729 [2024-04-26 23:36:09.955694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.729 [2024-04-26 23:36:09.966339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.729 [2024-04-26 23:36:09.966361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.729 [2024-04-26 23:36:09.966370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.729 [2024-04-26 23:36:09.978080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.729 [2024-04-26 23:36:09.978102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.729 [2024-04-26 23:36:09.978111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.991 [2024-04-26 23:36:09.989599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.991 [2024-04-26 23:36:09.989622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.991 [2024-04-26 23:36:09.989635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.991 [2024-04-26 23:36:10.000289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0) 00:33:20.991 [2024-04-26 23:36:10.000312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.991 [2024-04-26 23:36:10.000320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.991 [2024-04-26 23:36:10.011042] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0)
00:33:20.991 [2024-04-26 23:36:10.011066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:20.991 [2024-04-26 23:36:10.011075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:20.991 [2024-04-26 23:36:10.021174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0)
00:33:20.991 [2024-04-26 23:36:10.021197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:20.991 [2024-04-26 23:36:10.021206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... 23:36:10.029370 through 23:36:10.512176: the same data digest error / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats for every in-flight READ on tqpair 0x7d47e0 ...]
00:33:21.514 [2024-04-26 23:36:10.521547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0)
00:33:21.514 [2024-04-26 23:36:10.521569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:21.514 [2024-04-26 23:36:10.521578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:21.514 [2024-04-26 23:36:10.531533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0)
00:33:21.514 [2024-04-26 23:36:10.531556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:21.514 [2024-04-26 23:36:10.531565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:21.515 [2024-04-26 23:36:10.543095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0)
00:33:21.515 [2024-04-26 23:36:10.543117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:21.515 [2024-04-26 23:36:10.543130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:21.515 [2024-04-26 23:36:10.553536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0)
00:33:21.515 [2024-04-26 23:36:10.553558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:21.515 [2024-04-26 23:36:10.553567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:21.515 [2024-04-26 23:36:10.563407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0)
00:33:21.515 [2024-04-26 23:36:10.563430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:21.515 [2024-04-26 23:36:10.563438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:21.515 [2024-04-26 23:36:10.573932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d47e0)
00:33:21.515 [2024-04-26 23:36:10.573954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:21.515 [2024-04-26 23:36:10.573962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:21.515
00:33:21.515 Latency(us)
00:33:21.515 Device Information                                                           : runtime(s)    IOPS   MiB/s  Fail/s    TO/s  Average      min       max
00:33:21.515 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:21.515      nvme0n1                                                                 :       2.00 2974.10  371.76    0.00    0.00  5373.95  1269.76  13981.01
00:33:21.515 ===================================================================================================================
00:33:21.515 Total                                                                        :            2974.10  371.76    0.00    0.00  5373.95  1269.76  13981.01
00:33:21.515 0
00:33:21.515 23:36:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:21.515 23:36:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:21.515 23:36:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:21.515 | .driver_specific
00:33:21.515 | .nvme_error
00:33:21.515 | .status_code
00:33:21.515 | .command_transient_transport_error'
00:33:21.515 23:36:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:21.515 23:36:10 -- host/digest.sh@71 -- # (( 192 > 0 ))
00:33:21.515 23:36:10 -- host/digest.sh@73 -- # killprocess 4174054
00:33:21.515 23:36:10 -- common/autotest_common.sh@936 -- # '[' -z 4174054 ']'
00:33:21.515 23:36:10 -- common/autotest_common.sh@940 -- # kill -0 4174054
00:33:21.515 23:36:10 -- common/autotest_common.sh@941 -- # uname
00:33:21.515 23:36:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:33:21.515 23:36:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4174054
00:33:21.775 23:36:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:33:21.775 23:36:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:33:21.775 23:36:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4174054'
00:33:21.775 killing process with pid 4174054
00:33:21.775 23:36:10 -- common/autotest_common.sh@955 -- # kill 4174054
00:33:21.776 Received shutdown signal, test time was about 2.000000 seconds
00:33:21.776
00:33:21.776 Latency(us)
00:33:21.776 Device Information : runtime(s)    IOPS   MiB/s  Fail/s    TO/s  Average      min       max
00:33:21.776 ===================================================================================================================
00:33:21.776 Total              :               0.00    0.00    0.00    0.00     0.00     0.00      0.00
00:33:21.776 23:36:10 -- common/autotest_common.sh@960 -- # wait 4174054
00:33:21.776 23:36:10 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:21.776 23:36:10 -- host/digest.sh@54 -- # local rw bs qd
00:33:21.776 23:36:10 -- host/digest.sh@56 -- # rw=randwrite
00:33:21.776 23:36:10 -- host/digest.sh@56 -- # bs=4096
00:33:21.776 23:36:10 -- host/digest.sh@56 -- # qd=128
00:33:21.776 23:36:10 -- host/digest.sh@58 -- # bperfpid=4175222
00:33:21.776 23:36:10 -- host/digest.sh@60 -- # waitforlisten 4175222 /var/tmp/bperf.sock
00:33:21.776 23:36:10 -- common/autotest_common.sh@817 -- # '[' -z 4175222 ']'
00:33:21.776 23:36:10 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:21.776 23:36:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:21.776 23:36:10 -- common/autotest_common.sh@822 -- # local max_retries=100
00:33:21.776 23:36:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:21.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:21.776 23:36:10 -- common/autotest_common.sh@826 -- # xtrace_disable
00:33:21.776 23:36:10 -- common/autotest_common.sh@10 -- # set +x
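A note on the teardown just traced: get_transient_errcount is a single RPC plus a jq filter over bdev_get_iostat output, and the 192 in (( 192 > 0 )) is the transient-error tally from the randread pass. (The table above is also self-consistent: 2974.10 IOPS of 131072-byte reads is 2974.10 x 0.125 MiB = 371.76 MiB/s.) A minimal hand-rolled equivalent, reusing the socket path and bdev name this job logged; a sketch, not the digest.sh source itself:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdev_get_iostat exposes per-bdev NVMe error counters here because the
  # controller was set up after bdev_nvme_set_options --nvme-error-stat.
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # This run counted 192 COMMAND TRANSIENT TRANSPORT ERROR completions.
  (( errcount > 0 )) && echo "transient transport errors: $errcount"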
00:33:21.776 [2024-04-26 23:36:10.978502] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:33:21.776 [2024-04-26 23:36:10.978557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175222 ]
00:33:21.887 EAL: No free 2048 kB hugepages reported on node 1
00:33:22.036 [2024-04-26 23:36:11.037086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:22.036 [2024-04-26 23:36:11.065708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:22.036 23:36:11 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:33:22.036 23:36:11 -- common/autotest_common.sh@850 -- # return 0
00:33:22.036 23:36:11 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:22.036 23:36:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:22.036 23:36:11 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:22.296 23:36:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:22.296 23:36:11 -- common/autotest_common.sh@10 -- # set +x
00:33:22.296 23:36:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:22.296 23:36:11 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:22.296 23:36:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:22.556 nvme0n1
00:33:22.556 23:36:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:22.556 23:36:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:22.556 23:36:11 -- common/autotest_common.sh@10 -- # set +x
00:33:22.556 23:36:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:22.556 23:36:11 -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:22.556 23:36:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:22.556 Running I/O for 2 seconds...
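The trace above is the whole recipe for the randwrite digest-error pass; condensed into plain commands before the output below (same binaries, flags, and addresses as logged in this job; the backgrounding and linear ordering are an editor's sketch, not the digest.sh source):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf on its own RPC socket: core mask 0x2, 4 KiB random writes,
  # queue depth 128, 2-second run; -z makes it wait for an RPC-driven start.
  "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Injection off while attaching, so the connection itself comes up clean.
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
  # --ddgst enables the NVMe/TCP data digest (CRC32C) on every payload.
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-enable injection, corrupting crc32c results (flags exactly as logged);
  # each affected I/O completes as TRANSIENT TRANSPORT ERROR (00/22) below.
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests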
00:33:22.556 [2024-04-26 23:36:11.788257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90
00:33:22.556 [2024-04-26 23:36:11.788475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.556 [2024-04-26 23:36:11.788505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:22.556 [2024-04-26 23:36:11.800985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90
00:33:22.556 [2024-04-26 23:36:11.801285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:22.556 [2024-04-26 23:36:11.801313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... 23:36:11.813655 through 23:36:12.731645: the same Data digest error / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats for every queued WRITE on tqpair 0xd54730 ...]
00:33:23.343 [2024-04-26 23:36:12.744223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90
00:33:23.343 [2024-04-26 23:36:12.744520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:23.608 [2024-04-26 23:36:12.744539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.608 [2024-04-26 23:36:12.756790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.608 [2024-04-26 23:36:12.757119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.608 [2024-04-26 23:36:12.757138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.608 [2024-04-26 23:36:12.769366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.608 [2024-04-26 23:36:12.769688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.608 [2024-04-26 23:36:12.769708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.608 [2024-04-26 23:36:12.781939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.608 [2024-04-26 23:36:12.782254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.608 [2024-04-26 23:36:12.782273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.608 [2024-04-26 23:36:12.794459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.608 [2024-04-26 23:36:12.794774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.608 [2024-04-26 23:36:12.794793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.608 [2024-04-26 23:36:12.807069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.608 [2024-04-26 23:36:12.807408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.608 [2024-04-26 23:36:12.807427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.608 [2024-04-26 23:36:12.819577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.608 [2024-04-26 23:36:12.819771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.608 [2024-04-26 23:36:12.819790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.608 [2024-04-26 23:36:12.832176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.608 [2024-04-26 23:36:12.832491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13644 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:23.608 [2024-04-26 23:36:12.832510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.608 [2024-04-26 23:36:12.844702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.608 [2024-04-26 23:36:12.845021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.608 [2024-04-26 23:36:12.845040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.608 [2024-04-26 23:36:12.857284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.608 [2024-04-26 23:36:12.857598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.608 [2024-04-26 23:36:12.857617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.869809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.870160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.870179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.882468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.882782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.882801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.895017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.895345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.895364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.907605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.907893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.907914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.920215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.920553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22372 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.920573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.932784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.933083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.933103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.945417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.945721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.945741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.957967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.958282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.958301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.970525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.970820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.970844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.983088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.983406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.983426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:12.995689] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:12.996006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:12.996027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:13.008271] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:13.008588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6367 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:13.008611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:13.020821] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:13.021154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:13.021173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:13.033422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:13.033739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:13.033758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:13.045995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:13.046306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:13.046326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:13.058550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.871 [2024-04-26 23:36:13.058742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.871 [2024-04-26 23:36:13.058760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.871 [2024-04-26 23:36:13.071133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.872 [2024-04-26 23:36:13.071472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.872 [2024-04-26 23:36:13.071491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.872 [2024-04-26 23:36:13.083687] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.872 [2024-04-26 23:36:13.084017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.872 [2024-04-26 23:36:13.084036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.872 [2024-04-26 23:36:13.096255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.872 [2024-04-26 23:36:13.096576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22695 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.872 [2024-04-26 23:36:13.096595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.872 [2024-04-26 23:36:13.108832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.872 [2024-04-26 23:36:13.109167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.872 [2024-04-26 23:36:13.109186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:23.872 [2024-04-26 23:36:13.121398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:23.872 [2024-04-26 23:36:13.121595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.872 [2024-04-26 23:36:13.121613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.134014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.134304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.134324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.146576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.146895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.146914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.159126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.159428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.159447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.171679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.171982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.172001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.184228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.184547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:4858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.184566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.196807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.197121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.197141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.209561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.209876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.209896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.222091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.222415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.222435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.234690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.234980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.235000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.247244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.247569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.247589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.259791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.260105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.260125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.272350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.272659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:10766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.272679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.284893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.285203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.285225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.297426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.297736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.297764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.309972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.310265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.310284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.322503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.322843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.322862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.335082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.335378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.132 [2024-04-26 23:36:13.335400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.132 [2024-04-26 23:36:13.347628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.132 [2024-04-26 23:36:13.347946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.133 [2024-04-26 23:36:13.347965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.133 [2024-04-26 23:36:13.360125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.133 [2024-04-26 23:36:13.360458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:125 nsid:1 lba:24116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.133 [2024-04-26 23:36:13.360477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.133 [2024-04-26 23:36:13.372706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.133 [2024-04-26 23:36:13.372901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.133 [2024-04-26 23:36:13.372920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.133 [2024-04-26 23:36:13.385251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.133 [2024-04-26 23:36:13.385579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.133 [2024-04-26 23:36:13.385599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.397818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.398130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.398149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.410334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.410657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.410676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.422905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.423227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.423246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.435456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.435743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.435763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.447981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.448320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.448340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.460527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.460821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.460845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.473090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.473401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.473420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.485625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.485949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.485969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.498170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.498488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.498507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.510741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.511080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.511099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.523267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.523588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.523608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.535794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.536104] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.536123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.548344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.548659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.548677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.560868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.561182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.561201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.573432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.573746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.573764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.585976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.586314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.586333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.598519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.598835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.598858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.611040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.611361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.611380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.623565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 
23:36:13.623867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.623887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.394 [2024-04-26 23:36:13.636115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.394 [2024-04-26 23:36:13.636411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.394 [2024-04-26 23:36:13.636430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.655 [2024-04-26 23:36:13.648716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.655 [2024-04-26 23:36:13.648913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.655 [2024-04-26 23:36:13.648931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.655 [2024-04-26 23:36:13.661259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.655 [2024-04-26 23:36:13.661545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.656 [2024-04-26 23:36:13.661564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.656 [2024-04-26 23:36:13.673802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.656 [2024-04-26 23:36:13.674110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.656 [2024-04-26 23:36:13.674130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.656 [2024-04-26 23:36:13.686341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.656 [2024-04-26 23:36:13.686656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.656 [2024-04-26 23:36:13.686675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.656 [2024-04-26 23:36:13.698873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.656 [2024-04-26 23:36:13.699179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.656 [2024-04-26 23:36:13.699198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.656 [2024-04-26 23:36:13.711433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.656 
[2024-04-26 23:36:13.711747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.656 [2024-04-26 23:36:13.711767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.656 [2024-04-26 23:36:13.723978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.656 [2024-04-26 23:36:13.724303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.656 [2024-04-26 23:36:13.724322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.656 [2024-04-26 23:36:13.736515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.656 [2024-04-26 23:36:13.736852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.656 [2024-04-26 23:36:13.736872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.656 [2024-04-26 23:36:13.749021] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.656 [2024-04-26 23:36:13.749355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.656 [2024-04-26 23:36:13.749374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.656 [2024-04-26 23:36:13.761603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.656 [2024-04-26 23:36:13.761931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.656 [2024-04-26 23:36:13.761953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.656 [2024-04-26 23:36:13.774147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54730) with pdu=0x2000190fef90 00:33:24.656 [2024-04-26 23:36:13.774432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.656 [2024-04-26 23:36:13.774455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.656 00:33:24.656 Latency(us) 00:33:24.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.656 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:24.656 nvme0n1 : 2.01 20253.00 79.11 0.00 0.00 6306.78 5925.55 14636.37 00:33:24.656 =================================================================================================================== 00:33:24.656 Total : 20253.00 79.11 0.00 0.00 6306.78 5925.55 14636.37 00:33:24.656 0 00:33:24.656 23:36:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:24.656 23:36:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b 
00:33:24.656 23:36:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:24.656 23:36:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:24.656 23:36:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:24.656 | .driver_specific
00:33:24.656 | .nvme_error
00:33:24.656 | .status_code
00:33:24.656 | .command_transient_transport_error'
00:33:24.656 23:36:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:24.918 23:36:13 -- host/digest.sh@71 -- # (( 159 > 0 ))
00:33:24.918 23:36:13 -- host/digest.sh@73 -- # killprocess 4175222
00:33:24.918 23:36:13 -- common/autotest_common.sh@936 -- # '[' -z 4175222 ']'
00:33:24.918 23:36:13 -- common/autotest_common.sh@940 -- # kill -0 4175222
00:33:24.918 23:36:13 -- common/autotest_common.sh@941 -- # uname
00:33:24.918 23:36:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:33:24.918 23:36:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4175222
00:33:24.918 23:36:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:33:24.918 23:36:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:33:24.918 23:36:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4175222'
00:33:24.918 killing process with pid 4175222
00:33:24.918 23:36:14 -- common/autotest_common.sh@955 -- # kill 4175222
00:33:24.918 Received shutdown signal, test time was about 2.000000 seconds
00:33:24.918
00:33:24.918 Latency(us)
00:33:24.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:24.918 ===================================================================================================================
00:33:24.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:24.918 23:36:14 -- common/autotest_common.sh@960 -- # wait 4175222
00:33:24.918 23:36:14 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:24.918 23:36:14 -- host/digest.sh@54 -- # local rw bs qd
00:33:24.918 23:36:14 -- host/digest.sh@56 -- # rw=randwrite
00:33:24.918 23:36:14 -- host/digest.sh@56 -- # bs=131072
00:33:24.918 23:36:14 -- host/digest.sh@56 -- # qd=16
00:33:24.918 23:36:14 -- host/digest.sh@58 -- # bperfpid=4175818
00:33:24.918 23:36:14 -- host/digest.sh@60 -- # waitforlisten 4175818 /var/tmp/bperf.sock
00:33:24.918 23:36:14 -- common/autotest_common.sh@817 -- # '[' -z 4175818 ']'
00:33:24.918 23:36:14 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:24.918 23:36:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:24.918 23:36:14 -- common/autotest_common.sh@822 -- # local max_retries=100
00:33:24.918 23:36:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:24.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:24.918 23:36:14 -- common/autotest_common.sh@826 -- # xtrace_disable
00:33:24.918 23:36:14 -- common/autotest_common.sh@10 -- # set +x
00:33:25.179 [2024-04-26 23:36:14.189274] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:33:25.179 [2024-04-26 23:36:14.189347] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175818 ]
00:33:25.179 I/O size of 131072 is greater than zero copy threshold (65536).
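The (( 159 > 0 )) check in the trace above is the entire pass criterion for that phase: with --nvme-error-stat enabled, the bdev layer keeps a per-status-code NVMe error histogram that bdev_get_iostat exposes under driver_specific. A minimal standalone sketch of that readback, assuming the same workspace layout and a bdevperf instance still listening on /var/tmp/bperf.sock:

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount step traced above. The nvme_error
    # object only appears in the iostat output when bdev_nvme_set_options
    # --nvme-error-stat was issued beforehand.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')
    # Pass only if at least one injected CRC-32C fault surfaced as a
    # COMMAND TRANSIENT TRANSPORT ERROR completion (159 of them here).
    (( errcount > 0 ))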
00:33:25.179 Zero copy mechanism will not be used.
00:33:25.179 EAL: No free 2048 kB hugepages reported on node 1
00:33:25.179 [2024-04-26 23:36:14.250974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:25.179 [2024-04-26 23:36:14.278043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:25.752 23:36:14 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:33:25.752 23:36:14 -- common/autotest_common.sh@850 -- # return 0
00:33:25.752 23:36:14 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:25.752 23:36:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:26.013 23:36:15 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:26.013 23:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:26.013 23:36:15 -- common/autotest_common.sh@10 -- # set +x
00:33:26.013 23:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:26.013 23:36:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:26.013 23:36:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:26.273 nvme0n1
00:33:26.273 23:36:15 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:26.273 23:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:26.273 23:36:15 -- common/autotest_common.sh@10 -- # set +x
00:33:26.273 23:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:26.273 23:36:15 -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:26.273 23:36:15 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:26.536 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:26.536 Zero copy mechanism will not be used.
00:33:26.536 Running I/O for 2 seconds...
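Collected in one place, the setup for this second phase (128 KiB randwrite at queue depth 16) as a hedged sketch. The socket split is an assumption: the un-socketed rpc_cmd calls in the trace presumably hit the nvmf target's default RPC socket, while the bperf_rpc calls explicitly target the bdevperf instance on /var/tmp/bperf.sock. All flags are verbatim from the trace above.

    # Sketch of the 128 KiB / qd 16 error-injection phase; socket targets for
    # the accel calls are an assumption, not confirmed by the log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Initiator side: keep per-status-code NVMe error stats and retry
    # transient errors indefinitely instead of failing the I/O.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Target side (assumed default socket): clear any stale CRC-32C fault.
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # Attach with data digest (--ddgst) so each TCP data PDU carries a CRC-32C.
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm 32 CRC-32C corruptions, then start the timed run from bdevperf.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests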
00:33:26.536 [2024-04-26 23:36:15.597641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90
00:33:26.536 [2024-04-26 23:36:15.597947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:26.536 [2024-04-26 23:36:15.597985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line record repeats for each 128 KiB (32-block) write hit by an injected CRC-32C fault, at roughly 5 ms intervals from [2024-04-26 23:36:15.597641] through [2024-04-26 23:36:15.698991], where this section ends mid-record; only the timestamp, lba, and sqhd vary, with cid fixed at 15 ...]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.703265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.703500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.703528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.707602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.707835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.707864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.712305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.712542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.712566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.716823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.717065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.717088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.722032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.722270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.722293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.728072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.728306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.728330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.733593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.733824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 
[2024-04-26 23:36:15.733856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.740005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.740320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.740343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.749135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.749367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.749390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.755522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.755790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.755813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.761442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.761678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.761701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.768457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.768847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.768869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.777281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.777599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.777628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.537 [2024-04-26 23:36:15.784651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.537 [2024-04-26 23:36:15.784950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:26.537 [2024-04-26 23:36:15.784974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.799 [2024-04-26 23:36:15.791415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.799 [2024-04-26 23:36:15.791646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.799 [2024-04-26 23:36:15.791668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.797184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.797501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.797525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.802248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.802480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.802502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.807443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.807676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.807703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.812711] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.812951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.812973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.818683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.818930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.818953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.825195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.825431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.825453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.830432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.830667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.830690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.835517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.835754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.835775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.840130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.840363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.840386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.844497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.844734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.844756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.848930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.849167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.849188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.853292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.853528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.853551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.857733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.857974] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.857996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.862109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.862341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.862364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.866541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.866775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.866797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.870815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.871056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.871080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.875318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.875552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.875574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.879457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.879691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.879713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.883907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.884143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.884165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.888298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.888532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.888555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.893607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.893844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.893867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.898578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.898829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.898860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.903567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.903801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.903824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.908972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.909205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.909228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.914111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.914345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.914367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.919484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 23:36:15.919716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.919739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.924452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.800 [2024-04-26 
23:36:15.924687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.800 [2024-04-26 23:36:15.924710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.800 [2024-04-26 23:36:15.929495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.929734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.929757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.934751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.935023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.935049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.939384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.939619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.939643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.944628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.944873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.944895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.950430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.950705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.950727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.956603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.956846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.956867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.961636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with 
pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.961874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.961896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.966998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.967232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.967255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.971929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.972167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.972189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.977015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.977249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.977270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.981872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.982107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.982131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.986262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.986495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.986518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.990655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.990896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.990917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.995030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:15.995262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:15.995284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:15.999953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.000189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.000212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.004474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.004707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.004728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.008625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.008866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.008888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.012792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.013028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.013052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.017539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.017759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.017779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.021836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.022071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.022091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.026086] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.026288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.026307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.030474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.030677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.030695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.035862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.036065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.036084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.041146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.041357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.041376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.045865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.046076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.046096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.801 [2024-04-26 23:36:16.050653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:26.801 [2024-04-26 23:36:16.050876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.801 [2024-04-26 23:36:16.050895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.054974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.055191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.055211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:33:27.064 [2024-04-26 23:36:16.059225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.059440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.059463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.065009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.065228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.065248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.069832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.070056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.070076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.074064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.074285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.074305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.078561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.078779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.078800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.083665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.083892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.083912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.089439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.089670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.089689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.094401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.094616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.094636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.099122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.099338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.099358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.103943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.104173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.104194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.108761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.108993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.109014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.113662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.113893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.113914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.118712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.118946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.064 [2024-04-26 23:36:16.118967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.064 [2024-04-26 23:36:16.123807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.064 [2024-04-26 23:36:16.124054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.065 [2024-04-26 23:36:16.124077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.065 [2024-04-26 23:36:16.128650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.065 [2024-04-26 23:36:16.128893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.065 [2024-04-26 23:36:16.128914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.065 [2024-04-26 23:36:16.133352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.065 [2024-04-26 23:36:16.133586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.065 [2024-04-26 23:36:16.133609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.065 [2024-04-26 23:36:16.137717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.065 [2024-04-26 23:36:16.137944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.065 [2024-04-26 23:36:16.137964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.065 [2024-04-26 23:36:16.142030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.065 [2024-04-26 23:36:16.142262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.065 [2024-04-26 23:36:16.142282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.065 [2024-04-26 23:36:16.146654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.065 [2024-04-26 23:36:16.146879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.065 [2024-04-26 23:36:16.146900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.065 [2024-04-26 23:36:16.151205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.065 [2024-04-26 23:36:16.151415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.065 [2024-04-26 23:36:16.151434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.065 [2024-04-26 23:36:16.155215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.065 [2024-04-26 23:36:16.155418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.065 [2024-04-26 23:36:16.155437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:27.065 [2024-04-26 23:36:16.159435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90
00:33:27.065 [2024-04-26 23:36:16.159640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.065 [2024-04-26 23:36:16.159661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record sequence (a tcp.c:2047:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90", a WRITE command print from nvme_qpair.c:243, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474) repeats for every subsequent WRITE in this run; only the timestamps (23:36:16.164 through 23:36:16.978), the LBAs, and the sqhd values (cycling 0001/0021/0041/0061) vary, while sqid:1 qid:1 cid:15 nsid:1 len:32 p:0 m:0 dnr:0 stay constant; elapsed markers advance from 00:33:27.065 to 00:33:27.856 ...]
tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.855 [2024-04-26 23:36:16.951347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.855 [2024-04-26 23:36:16.951367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.855 [2024-04-26 23:36:16.956060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.855 [2024-04-26 23:36:16.956294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.855 [2024-04-26 23:36:16.956315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.855 [2024-04-26 23:36:16.960686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.855 [2024-04-26 23:36:16.960926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.855 [2024-04-26 23:36:16.960948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.855 [2024-04-26 23:36:16.965424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.855 [2024-04-26 23:36:16.965657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.855 [2024-04-26 23:36:16.965678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.855 [2024-04-26 23:36:16.969777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.855 [2024-04-26 23:36:16.970008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.855 [2024-04-26 23:36:16.970031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.855 [2024-04-26 23:36:16.973992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.855 [2024-04-26 23:36:16.974223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.855 [2024-04-26 23:36:16.974245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.855 [2024-04-26 23:36:16.978137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.855 [2024-04-26 23:36:16.978370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:16.978391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:16.982551] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:16.982784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:16.982805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:16.989188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:16.989416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:16.989437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:16.994862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:16.995095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:16.995116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:16.999720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:16.999990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.000013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.004987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.005220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.005241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.009958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.010198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.010219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.015349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.015586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.015608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:33:27.856 [2024-04-26 23:36:17.021029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.021266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.021291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.025742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.025989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.026015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.030221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.030451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.030480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.034512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.034745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.034767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.038745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.038978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.039002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.042914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.043147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.043169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.047412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.047642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.047664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.051954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.052188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.052209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.056506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.056738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.056759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.060930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.061167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.061188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.065728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.065960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.065982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.071957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.072232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.072254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.077025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.077256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.077278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.082107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.082349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.082370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.086632] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.086863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.086884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.091244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.091476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.091496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.095656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.095905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.095926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.100061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.100287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.100307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.856 [2024-04-26 23:36:17.104907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:27.856 [2024-04-26 23:36:17.105134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.856 [2024-04-26 23:36:17.105160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.109188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.109415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.109437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.113375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.113595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.113620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.117509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.117734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.117755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.121619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.121823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.121849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.125560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.125765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.125784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.129714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.129925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.129944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.133746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.133949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.133967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.137775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.137978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.138000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.141768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.141965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 
[2024-04-26 23:36:17.141984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.145719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.145911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.145928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.149517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.149704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.149722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.153384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.153571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.153588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.157682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.157871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.157894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.161580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.161766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.161783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.165438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.165626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.165644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.169610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.169798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.169815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.173829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.174033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.174050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.178834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.179042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.179060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.184047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.184277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.184299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.189106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.189324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.189345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.194543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.194789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.194812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.201155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.201393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.201415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.206722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.206963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.206984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.211210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.211441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.211464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.215559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.215793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.215815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.220032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.220267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.220288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.224974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.225207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.225228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.229703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.229943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.229964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.234596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.234822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.234851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.239160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.239386] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.239406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.244286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.244512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.244533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.249146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.249373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.249395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.254114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.254339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.254370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.258937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.259159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.259183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.264047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.264263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.264283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.268546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.268755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.268775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.272825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.273050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.273069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.277733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.277956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.277976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.281727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.281947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.281967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.285759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.285979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.286000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.289745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.120 [2024-04-26 23:36:17.289952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.120 [2024-04-26 23:36:17.289971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.120 [2024-04-26 23:36:17.293978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.294181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.294199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.298100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.298304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.298323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.302402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 
[2024-04-26 23:36:17.302597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.302615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.306522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.306715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.306732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.310357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.310548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.310565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.314965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.315154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.315172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.318968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.319158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.319175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.322744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.322944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.322962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.326932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.327128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.327146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.330832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) 
with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.331036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.331055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.334701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.334902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.334922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.338549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.338746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.338764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.342444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.342640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.342658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.346377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.346576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.346594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.350397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.350590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.350607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.354418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.354610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.354628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.358524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.358710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.358731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.362822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.363018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.363040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.366986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.367180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.367202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.121 [2024-04-26 23:36:17.371066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.121 [2024-04-26 23:36:17.371262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.121 [2024-04-26 23:36:17.371281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.383 [2024-04-26 23:36:17.375038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.383 [2024-04-26 23:36:17.375232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.383 [2024-04-26 23:36:17.375250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.383 [2024-04-26 23:36:17.379183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.383 [2024-04-26 23:36:17.379380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.383 [2024-04-26 23:36:17.379398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.383 [2024-04-26 23:36:17.383268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90 00:33:28.383 [2024-04-26 23:36:17.383470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.383 [2024-04-26 23:36:17.383489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.383 [2024-04-26 23:36:17.387324] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90
00:33:28.383 [2024-04-26 23:36:17.387523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.383 [2024-04-26 23:36:17.387547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same triplet (a data_crc32_calc_done *ERROR*, the failed WRITE, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for roughly forty more WRITEs on qid:1 cid:15 at varying LBAs between 23:36:17.391 and 23:36:17.584; the repeats are elided here ...]
00:33:28.384 [2024-04-26 23:36:17.590115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd54ae0) with pdu=0x2000190fef90
00:33:28.384 [2024-04-26 23:36:17.590244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:33:28.384 [2024-04-26 23:36:17.590268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.384 00:33:28.384 Latency(us) 00:33:28.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.384 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:28.384 nvme0n1 : 2.00 6009.46 751.18 0.00 0.00 2656.90 1815.89 11031.89 00:33:28.384 =================================================================================================================== 00:33:28.384 Total : 6009.46 751.18 0.00 0.00 2656.90 1815.89 11031.89 00:33:28.384 0 00:33:28.384 23:36:17 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:28.384 23:36:17 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:28.384 23:36:17 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:28.384 | .driver_specific 00:33:28.384 | .nvme_error 00:33:28.385 | .status_code 00:33:28.385 | .command_transient_transport_error' 00:33:28.385 23:36:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:28.645 23:36:17 -- host/digest.sh@71 -- # (( 388 > 0 )) 00:33:28.645 23:36:17 -- host/digest.sh@73 -- # killprocess 4175818 00:33:28.645 23:36:17 -- common/autotest_common.sh@936 -- # '[' -z 4175818 ']' 00:33:28.645 23:36:17 -- common/autotest_common.sh@940 -- # kill -0 4175818 00:33:28.645 23:36:17 -- common/autotest_common.sh@941 -- # uname 00:33:28.645 23:36:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:28.645 23:36:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4175818 00:33:28.645 23:36:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:28.645 23:36:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:28.645 23:36:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4175818' 00:33:28.645 killing process with pid 4175818 00:33:28.645 23:36:17 -- common/autotest_common.sh@955 -- # kill 4175818 00:33:28.645 Received shutdown signal, test time was about 2.000000 seconds 00:33:28.645 00:33:28.645 Latency(us) 00:33:28.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.645 =================================================================================================================== 00:33:28.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:28.645 23:36:17 -- common/autotest_common.sh@960 -- # wait 4175818 00:33:28.905 23:36:17 -- host/digest.sh@116 -- # killprocess 4173438 00:33:28.905 23:36:17 -- common/autotest_common.sh@936 -- # '[' -z 4173438 ']' 00:33:28.905 23:36:17 -- common/autotest_common.sh@940 -- # kill -0 4173438 00:33:28.905 23:36:17 -- common/autotest_common.sh@941 -- # uname 00:33:28.905 23:36:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:28.905 23:36:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4173438 00:33:28.905 23:36:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:28.905 23:36:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:28.905 23:36:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4173438' 00:33:28.905 killing process with pid 4173438 00:33:28.905 23:36:18 -- common/autotest_common.sh@955 -- # kill 4173438 00:33:28.905 23:36:18 -- common/autotest_common.sh@960 -- # wait 4173438 00:33:28.905 
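The (( 388 > 0 )) gate in the trace above is the actual pass criterion for nvmf_digest_error: the test corrupts the CRC32C data digest on its WRITEs, and every rejected PDU increments an NVMe error counter that bdevperf exposes over its RPC socket. What get_transient_errcount is doing there, condensed into one place for readability (rpc.py path and bperf socket taken from the trace; a sketch, not the verbatim host/digest.sh source):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_transient_errcount() {
    local bdev=$1
    # bdevperf serves the standard SPDK RPC interface on /var/tmp/bperf.sock;
    # bdev_get_iostat reports per-bdev NVMe error counters under
    # .driver_specific.nvme_error, where each injected digest failure lands
    # as a COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The test passes only if at least one corrupted WRITE was counted:
(( $(get_transient_errcount nvme0n1) > 0 ))

Counting on the initiator side through bdev_get_iostat means the check needs no target-side state at all.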
00:33:28.905 real 0m14.252s 00:33:28.905 user 0m28.122s 00:33:28.905 sys 0m3.281s 00:33:28.905 23:36:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:28.905 23:36:18 -- common/autotest_common.sh@10 -- # set +x 00:33:28.905 ************************************ 00:33:28.905 END TEST nvmf_digest_error 00:33:28.905 ************************************ 00:33:29.166 23:36:18 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:29.166 23:36:18 -- host/digest.sh@150 -- # nvmftestfini 00:33:29.166 23:36:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:29.166 23:36:18 -- nvmf/common.sh@117 -- # sync 00:33:29.166 23:36:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:29.166 23:36:18 -- nvmf/common.sh@120 -- # set +e 00:33:29.166 23:36:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:29.166 23:36:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:29.166 rmmod nvme_tcp 00:33:29.166 rmmod nvme_fabrics 00:33:29.166 rmmod nvme_keyring 00:33:29.166 23:36:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:29.166 23:36:18 -- nvmf/common.sh@124 -- # set -e 00:33:29.166 23:36:18 -- nvmf/common.sh@125 -- # return 0 00:33:29.166 23:36:18 -- nvmf/common.sh@478 -- # '[' -n 4173438 ']' 00:33:29.166 23:36:18 -- nvmf/common.sh@479 -- # killprocess 4173438 00:33:29.166 23:36:18 -- common/autotest_common.sh@936 -- # '[' -z 4173438 ']' 00:33:29.166 23:36:18 -- common/autotest_common.sh@940 -- # kill -0 4173438 00:33:29.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (4173438) - No such process 00:33:29.166 23:36:18 -- common/autotest_common.sh@963 -- # echo 'Process with pid 4173438 is not found' 00:33:29.166 Process with pid 4173438 is not found 00:33:29.166 23:36:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:33:29.166 23:36:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:29.166 23:36:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:29.166 23:36:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:29.166 23:36:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:29.166 23:36:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.166 23:36:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:29.166 23:36:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.079 23:36:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:31.079 00:33:31.079 real 0m38.660s 00:33:31.079 user 0m57.902s 00:33:31.079 sys 0m12.315s 00:33:31.340 23:36:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:31.340 23:36:20 -- common/autotest_common.sh@10 -- # set +x 00:33:31.340 ************************************ 00:33:31.340 END TEST nvmf_digest 00:33:31.340 ************************************ 00:33:31.340 23:36:20 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:33:31.340 23:36:20 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:33:31.340 23:36:20 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:33:31.340 23:36:20 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:31.340 23:36:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:31.340 23:36:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:31.340 23:36:20 -- common/autotest_common.sh@10 -- # set +x 00:33:31.340 ************************************ 00:33:31.340 START TEST nvmf_bdevperf 00:33:31.340 ************************************ 00:33:31.340 23:36:20 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:31.602 * Looking for test storage... 00:33:31.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:31.602 23:36:20 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.602 23:36:20 -- nvmf/common.sh@7 -- # uname -s 00:33:31.602 23:36:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.602 23:36:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.602 23:36:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.602 23:36:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.602 23:36:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.602 23:36:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.602 23:36:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.602 23:36:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.602 23:36:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.602 23:36:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.602 23:36:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:31.602 23:36:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:31.602 23:36:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.602 23:36:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.602 23:36:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.602 23:36:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.602 23:36:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.602 23:36:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.602 23:36:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.602 23:36:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.602 23:36:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.602 23:36:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.602 23:36:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.602 23:36:20 -- paths/export.sh@5 -- # export PATH 00:33:31.602 23:36:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.602 23:36:20 -- nvmf/common.sh@47 -- # : 0 00:33:31.602 23:36:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:31.602 23:36:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:31.602 23:36:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.602 23:36:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.602 23:36:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.602 23:36:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:31.602 23:36:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:31.602 23:36:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:31.602 23:36:20 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:31.602 23:36:20 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:31.602 23:36:20 -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:31.602 23:36:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:33:31.602 23:36:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.602 23:36:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:33:31.602 23:36:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:33:31.602 23:36:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:33:31.602 23:36:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.602 23:36:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.602 23:36:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.602 23:36:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:33:31.602 23:36:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:33:31.602 23:36:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:33:31.602 23:36:20 -- common/autotest_common.sh@10 -- # set +x 00:33:39.747 23:36:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:39.747 23:36:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:33:39.747 23:36:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:39.747 23:36:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:39.747 23:36:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:39.747 23:36:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:39.747 23:36:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:39.747 23:36:27 -- nvmf/common.sh@295 -- # net_devs=() 00:33:39.747 23:36:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:39.747 23:36:27 -- nvmf/common.sh@296 
-- # e810=() 00:33:39.747 23:36:27 -- nvmf/common.sh@296 -- # local -ga e810 00:33:39.747 23:36:27 -- nvmf/common.sh@297 -- # x722=() 00:33:39.747 23:36:27 -- nvmf/common.sh@297 -- # local -ga x722 00:33:39.747 23:36:27 -- nvmf/common.sh@298 -- # mlx=() 00:33:39.747 23:36:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:33:39.747 23:36:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.747 23:36:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.747 23:36:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.748 23:36:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.748 23:36:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.748 23:36:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.748 23:36:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.748 23:36:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.748 23:36:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.748 23:36:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.748 23:36:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.748 23:36:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:39.748 23:36:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:39.748 23:36:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:39.748 23:36:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.748 23:36:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:39.748 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:39.748 23:36:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.748 23:36:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:39.748 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:39.748 23:36:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:39.748 23:36:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.748 23:36:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.748 23:36:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:39.748 23:36:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.748 23:36:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:39.748 Found 
net devices under 0000:31:00.0: cvl_0_0 00:33:39.748 23:36:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.748 23:36:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.748 23:36:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.748 23:36:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:39.748 23:36:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.748 23:36:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:39.748 Found net devices under 0000:31:00.1: cvl_0_1 00:33:39.748 23:36:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.748 23:36:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:33:39.748 23:36:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:33:39.748 23:36:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:33:39.748 23:36:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.748 23:36:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.748 23:36:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.748 23:36:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:39.748 23:36:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.748 23:36:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.748 23:36:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:39.748 23:36:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.748 23:36:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.748 23:36:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:39.748 23:36:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:39.748 23:36:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.748 23:36:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.748 23:36:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.748 23:36:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.748 23:36:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:39.748 23:36:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.748 23:36:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.748 23:36:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.748 23:36:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:39.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:33:39.748 00:33:39.748 --- 10.0.0.2 ping statistics --- 00:33:39.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.748 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:33:39.748 23:36:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:33:39.748 00:33:39.748 --- 10.0.0.1 ping statistics --- 00:33:39.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.748 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:33:39.748 23:36:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.748 23:36:27 -- nvmf/common.sh@411 -- # return 0 00:33:39.748 23:36:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:33:39.748 23:36:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.748 23:36:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:39.748 23:36:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.748 23:36:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:39.748 23:36:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:39.748 23:36:28 -- host/bdevperf.sh@25 -- # tgt_init 00:33:39.748 23:36:28 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:39.748 23:36:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:33:39.748 23:36:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:39.748 23:36:28 -- common/autotest_common.sh@10 -- # set +x 00:33:39.748 23:36:28 -- nvmf/common.sh@470 -- # nvmfpid=4180901 00:33:39.748 23:36:28 -- nvmf/common.sh@471 -- # waitforlisten 4180901 00:33:39.748 23:36:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:39.748 23:36:28 -- common/autotest_common.sh@817 -- # '[' -z 4180901 ']' 00:33:39.748 23:36:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.748 23:36:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:39.748 23:36:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.748 23:36:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:39.748 23:36:28 -- common/autotest_common.sh@10 -- # set +x 00:33:39.748 [2024-04-26 23:36:28.068496] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:33:39.748 [2024-04-26 23:36:28.068560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.748 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.748 [2024-04-26 23:36:28.140013] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:39.748 [2024-04-26 23:36:28.177885] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.748 [2024-04-26 23:36:28.177933] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.748 [2024-04-26 23:36:28.177943] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.748 [2024-04-26 23:36:28.177950] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.748 [2024-04-26 23:36:28.177957] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
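At this point the TCP test topology is up and the target application is starting. For reference, the namespace plumbing that nvmf_tcp_init traced out above boils down to the sequence below (interface names and addresses exactly as logged; a condensed sketch, not the verbatim nvmf/common.sh source):

# cvl_0_0 / cvl_0_1 are the two ports of the E810 NIC discovered earlier.
ip netns add cvl_0_0_ns_spdk                        # target runs in its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port there
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Moving one port of the same NIC into a namespace is what lets a single host act as both NVMe/TCP initiator and target over a real wire.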
00:33:39.748 [2024-04-26 23:36:28.178116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:39.748 [2024-04-26 23:36:28.178272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.748 [2024-04-26 23:36:28.178273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:39.748 23:36:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:39.748 23:36:28 -- common/autotest_common.sh@850 -- # return 0 00:33:39.748 23:36:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:33:39.748 23:36:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:39.748 23:36:28 -- common/autotest_common.sh@10 -- # set +x 00:33:39.748 23:36:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:39.748 23:36:28 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:39.748 23:36:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:39.748 23:36:28 -- common/autotest_common.sh@10 -- # set +x 00:33:39.748 [2024-04-26 23:36:28.877108] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.748 23:36:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:39.748 23:36:28 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:39.748 23:36:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:39.748 23:36:28 -- common/autotest_common.sh@10 -- # set +x 00:33:39.748 Malloc0 00:33:39.748 23:36:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:39.748 23:36:28 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:39.748 23:36:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:39.748 23:36:28 -- common/autotest_common.sh@10 -- # set +x 00:33:39.748 23:36:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:39.748 23:36:28 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:39.748 23:36:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:39.748 23:36:28 -- common/autotest_common.sh@10 -- # set +x 00:33:39.748 23:36:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:39.748 23:36:28 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.748 23:36:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:39.749 23:36:28 -- common/autotest_common.sh@10 -- # set +x 00:33:39.749 [2024-04-26 23:36:28.939245] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.749 23:36:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:39.749 23:36:28 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:39.749 23:36:28 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:39.749 23:36:28 -- nvmf/common.sh@521 -- # config=() 00:33:39.749 23:36:28 -- nvmf/common.sh@521 -- # local subsystem config 00:33:39.749 23:36:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:39.749 23:36:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:39.749 { 00:33:39.749 "params": { 00:33:39.749 "name": "Nvme$subsystem", 00:33:39.749 "trtype": "$TEST_TRANSPORT", 00:33:39.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.749 "adrfam": "ipv4", 00:33:39.749 "trsvcid": "$NVMF_PORT", 00:33:39.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.749 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.749 "hdgst": ${hdgst:-false}, 00:33:39.749 "ddgst": ${ddgst:-false} 00:33:39.749 }, 00:33:39.749 "method": "bdev_nvme_attach_controller" 00:33:39.749 } 00:33:39.749 EOF 00:33:39.749 )") 00:33:39.749 23:36:28 -- nvmf/common.sh@543 -- # cat 00:33:39.749 23:36:28 -- nvmf/common.sh@545 -- # jq . 00:33:39.749 23:36:28 -- nvmf/common.sh@546 -- # IFS=, 00:33:39.749 23:36:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:39.749 "params": { 00:33:39.749 "name": "Nvme1", 00:33:39.749 "trtype": "tcp", 00:33:39.749 "traddr": "10.0.0.2", 00:33:39.749 "adrfam": "ipv4", 00:33:39.749 "trsvcid": "4420", 00:33:39.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:39.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:39.749 "hdgst": false, 00:33:39.749 "ddgst": false 00:33:39.749 }, 00:33:39.749 "method": "bdev_nvme_attach_controller" 00:33:39.749 }' 00:33:39.749 [2024-04-26 23:36:28.992509] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:33:39.749 [2024-04-26 23:36:28.992557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4181068 ] 00:33:40.010 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.010 [2024-04-26 23:36:29.051944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.010 [2024-04-26 23:36:29.081036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.010 Running I/O for 1 seconds... 00:33:41.397 00:33:41.398 Latency(us) 00:33:41.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.398 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:41.398 Verification LBA range: start 0x0 length 0x4000 00:33:41.398 Nvme1n1 : 1.01 8872.69 34.66 0.00 0.00 14368.64 3031.04 17257.81 00:33:41.398 =================================================================================================================== 00:33:41.398 Total : 8872.69 34.66 0.00 0.00 14368.64 3031.04 17257.81 00:33:41.398 23:36:30 -- host/bdevperf.sh@30 -- # bdevperfpid=4181270 00:33:41.398 23:36:30 -- host/bdevperf.sh@32 -- # sleep 3 00:33:41.398 23:36:30 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:41.398 23:36:30 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:41.398 23:36:30 -- nvmf/common.sh@521 -- # config=() 00:33:41.398 23:36:30 -- nvmf/common.sh@521 -- # local subsystem config 00:33:41.398 23:36:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:41.398 23:36:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:41.398 { 00:33:41.398 "params": { 00:33:41.398 "name": "Nvme$subsystem", 00:33:41.398 "trtype": "$TEST_TRANSPORT", 00:33:41.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:41.398 "adrfam": "ipv4", 00:33:41.398 "trsvcid": "$NVMF_PORT", 00:33:41.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:41.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:41.398 "hdgst": ${hdgst:-false}, 00:33:41.398 "ddgst": ${ddgst:-false} 00:33:41.398 }, 00:33:41.398 "method": "bdev_nvme_attach_controller" 00:33:41.398 } 00:33:41.398 EOF 00:33:41.398 )") 00:33:41.398 23:36:30 -- nvmf/common.sh@543 -- # cat 00:33:41.398 23:36:30 -- nvmf/common.sh@545 -- # jq . 
00:33:41.398 23:36:30 -- nvmf/common.sh@546 -- # IFS=, 00:33:41.398 23:36:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:41.398 "params": { 00:33:41.398 "name": "Nvme1", 00:33:41.398 "trtype": "tcp", 00:33:41.398 "traddr": "10.0.0.2", 00:33:41.398 "adrfam": "ipv4", 00:33:41.398 "trsvcid": "4420", 00:33:41.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:41.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:41.398 "hdgst": false, 00:33:41.398 "ddgst": false 00:33:41.398 }, 00:33:41.398 "method": "bdev_nvme_attach_controller" 00:33:41.398 }' 00:33:41.398 [2024-04-26 23:36:30.398862] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:33:41.398 [2024-04-26 23:36:30.398922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4181270 ] 00:33:41.398 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.398 [2024-04-26 23:36:30.458188] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.398 [2024-04-26 23:36:30.486509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.658 Running I/O for 15 seconds... 00:33:44.202 23:36:33 -- host/bdevperf.sh@33 -- # kill -9 4180901 00:33:44.202 23:36:33 -- host/bdevperf.sh@35 -- # sleep 3 00:33:44.202 [2024-04-26 23:36:33.366321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.202 [2024-04-26 23:36:33.366364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.202 [2024-04-26 23:36:33.366385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.202 [2024-04-26 23:36:33.366394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.202 [2024-04-26 23:36:33.366405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.202 [2024-04-26 23:36:33.366415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.202 [2024-04-26 23:36:33.366428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.202 [2024-04-26 23:36:33.366436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.202 [2024-04-26 23:36:33.366448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.202 [2024-04-26 23:36:33.366456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.202 [2024-04-26 23:36:33.366467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.202 [2024-04-26 23:36:33.366474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.202 [2024-04-26 23:36:33.366484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.202 [2024-04-26 23:36:33.366494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... some fifty more queued READ/WRITE commands (lba:50176 through lba:50560) complete the same way while the qpair is torn down, each command print followed by an ABORTED - SQ DELETION (00/08) completion; the repeats are elided here ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.203 [2024-04-26 23:36:33.367344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.203 [2024-04-26 23:36:33.367351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.203 [2024-04-26 23:36:33.367362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.203 [2024-04-26 23:36:33.367370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.203 [2024-04-26 23:36:33.367379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.203 [2024-04-26 23:36:33.367386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.203 [2024-04-26 23:36:33.367394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.203 [2024-04-26 23:36:33.367402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.203 [2024-04-26 23:36:33.367411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.203 [2024-04-26 23:36:33.367419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.203 [2024-04-26 23:36:33.367428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.203 [2024-04-26 23:36:33.367435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.203 [2024-04-26 23:36:33.367444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.203 [2024-04-26 23:36:33.367452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.203 [2024-04-26 23:36:33.367461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.203 [2024-04-26 23:36:33.367468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.203 [2024-04-26 23:36:33.367477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.203 [2024-04-26 23:36:33.367485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.203 [2024-04-26 23:36:33.367494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367849] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.367987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.367997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.204 [2024-04-26 23:36:33.368005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.368014] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.204 [2024-04-26 23:36:33.368021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.368030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.204 [2024-04-26 23:36:33.368038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.368047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.204 [2024-04-26 23:36:33.368055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.368065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.204 [2024-04-26 23:36:33.368072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.368081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.204 [2024-04-26 23:36:33.368088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.368097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.204 [2024-04-26 23:36:33.368105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.368114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.368121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.368130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.368137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.368146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.204 [2024-04-26 23:36:33.368153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.204 [2024-04-26 23:36:33.368162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50840 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:44.205 [2024-04-26 23:36:33.368351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368523] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.205 [2024-04-26 23:36:33.368555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.205 [2024-04-26 23:36:33.368572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.205 [2024-04-26 23:36:33.368588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.205 [2024-04-26 23:36:33.368604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.205 [2024-04-26 23:36:33.368622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.205 [2024-04-26 23:36:33.368639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.205 [2024-04-26 23:36:33.368655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.205 [2024-04-26 23:36:33.368672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1088440 is same with the state(5) to be set 00:33:44.205 [2024-04-26 23:36:33.368689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:33:44.205 [2024-04-26 23:36:33.368695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:44.205 [2024-04-26 23:36:33.368701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50128 len:8 PRP1 0x0 PRP2 0x0 00:33:44.205 [2024-04-26 23:36:33.368709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368746] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1088440 was disconnected and freed. reset controller. 00:33:44.205 [2024-04-26 23:36:33.368790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:44.205 [2024-04-26 23:36:33.368799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:44.205 [2024-04-26 23:36:33.368816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:44.205 [2024-04-26 23:36:33.368831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:44.205 [2024-04-26 23:36:33.368851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.205 [2024-04-26 23:36:33.368858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.205 [2024-04-26 23:36:33.372317] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.205 [2024-04-26 23:36:33.372338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.205 [2024-04-26 23:36:33.373241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.205 [2024-04-26 23:36:33.373510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.205 [2024-04-26 23:36:33.373523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.206 [2024-04-26 23:36:33.373537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.206 [2024-04-26 23:36:33.373772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.206 [2024-04-26 23:36:33.373996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.206 [2024-04-26 23:36:33.374006] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
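Annotation (not part of the captured log): the "(00/08)" that spdk_nvme_print_completion appends to each aborted entry above is the NVMe Status Code Type / Status Code pair — SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion", the status reported for I/O still queued when a submission queue is torn down during a reset; m:0 dnr:0 means no further status follows and the command may be retried. A minimal standalone C sketch of decoding that pair (the macro and function names here are illustrative, not SPDK identifiers):

/* Classify an NVMe completion status as "aborted due to SQ deletion".
 * The two constants mirror the NVMe base-spec values that the log
 * prints as "(00/08)"; the names are ours, not SPDK's. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_GENERIC            0x00 /* status code type 0: generic */
#define NVME_SC_ABORTED_SQ_DELETION 0x08 /* command aborted, SQ deleted */

static bool aborted_by_sq_deletion(uint8_t sct, uint8_t sc)
{
    return sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION;
}

int main(void)
{
    uint8_t sct = 0x00, sc = 0x08; /* the "(00/08)" pair from the log */
    if (aborted_by_sq_deletion(sct, sc))
        puts("I/O aborted because its submission queue was deleted");
    return 0;
}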
00:33:44.206 [2024-04-26 23:36:33.374014] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.206 [2024-04-26 23:36:33.377488] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[log condensed: 2024-04-26 23:36:33.386304-23:36:33.654235 (elapsed 00:33:44.206-00:33:44.473) — twenty further reset attempts against [nqn.2016-06.io.spdk:cnode1] repeat the identical cycle: nvme_ctrlr_disconnect "resetting controller"; two posix_sock_create "connect() failed, errno = 111"; nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420"; nvme_tcp_qpair_set_recv_state "The recv state of tqpair=0xe56640 is same with the state(5) to be set"; nvme_tcp_qpair_process_completions "Failed to flush tqpair=0xe56640 (9): Bad file descriptor"; nvme_ctrlr_process_init "Ctrlr is in error state"; spdk_nvme_ctrlr_reconnect_poll_async "controller reinitialization failed"; nvme_ctrlr_fail "in failed state."; _bdev_nvme_reset_ctrlr_complete "Resetting controller failed."]
00:33:44.473 [2024-04-26 23:36:33.663244] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.473 [2024-04-26 23:36:33.663780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.473 [2024-04-26 23:36:33.664108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.473 [2024-04-26 23:36:33.664120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.473 [2024-04-26 23:36:33.664127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.473 [2024-04-26 23:36:33.664343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.473 [2024-04-26 23:36:33.664558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.473 [2024-04-26 23:36:33.664567] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.473 [2024-04-26 23:36:33.664575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.473 [2024-04-26 23:36:33.668046] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.473 [2024-04-26 23:36:33.677090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.473 [2024-04-26 23:36:33.677750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.473 [2024-04-26 23:36:33.678097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.473 [2024-04-26 23:36:33.678115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.473 [2024-04-26 23:36:33.678125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.473 [2024-04-26 23:36:33.678359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.473 [2024-04-26 23:36:33.678577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.473 [2024-04-26 23:36:33.678587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.473 [2024-04-26 23:36:33.678594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.473 [2024-04-26 23:36:33.682074] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.473 [2024-04-26 23:36:33.690874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.473 [2024-04-26 23:36:33.691557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.473 [2024-04-26 23:36:33.691957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.473 [2024-04-26 23:36:33.691972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.473 [2024-04-26 23:36:33.691982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.473 [2024-04-26 23:36:33.692215] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.473 [2024-04-26 23:36:33.692433] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.473 [2024-04-26 23:36:33.692442] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.473 [2024-04-26 23:36:33.692450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.473 [2024-04-26 23:36:33.695928] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.473 [2024-04-26 23:36:33.704726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.473 [2024-04-26 23:36:33.705371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.473 [2024-04-26 23:36:33.705726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.473 [2024-04-26 23:36:33.705740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.473 [2024-04-26 23:36:33.705749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.473 [2024-04-26 23:36:33.705992] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.473 [2024-04-26 23:36:33.706211] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.473 [2024-04-26 23:36:33.706220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.473 [2024-04-26 23:36:33.706228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.473 [2024-04-26 23:36:33.709699] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.473 [2024-04-26 23:36:33.718498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.473 [2024-04-26 23:36:33.719186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.473 [2024-04-26 23:36:33.719584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.473 [2024-04-26 23:36:33.719598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.473 [2024-04-26 23:36:33.719611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.473 [2024-04-26 23:36:33.719853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.473 [2024-04-26 23:36:33.720072] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.473 [2024-04-26 23:36:33.720082] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.473 [2024-04-26 23:36:33.720089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.776 [2024-04-26 23:36:33.723564] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.776 [2024-04-26 23:36:33.732369] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.776 [2024-04-26 23:36:33.732934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.733298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.733312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.776 [2024-04-26 23:36:33.733322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.776 [2024-04-26 23:36:33.733555] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.776 [2024-04-26 23:36:33.733773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.776 [2024-04-26 23:36:33.733782] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.776 [2024-04-26 23:36:33.733790] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.776 [2024-04-26 23:36:33.737274] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.776 [2024-04-26 23:36:33.746278] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.776 [2024-04-26 23:36:33.746945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.747305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.747319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.776 [2024-04-26 23:36:33.747328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.776 [2024-04-26 23:36:33.747561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.776 [2024-04-26 23:36:33.747780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.776 [2024-04-26 23:36:33.747789] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.776 [2024-04-26 23:36:33.747797] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.776 [2024-04-26 23:36:33.751276] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.776 [2024-04-26 23:36:33.760079] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.776 [2024-04-26 23:36:33.760760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.761164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.761179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.776 [2024-04-26 23:36:33.761189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.776 [2024-04-26 23:36:33.761426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.776 [2024-04-26 23:36:33.761645] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.776 [2024-04-26 23:36:33.761654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.776 [2024-04-26 23:36:33.761662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.776 [2024-04-26 23:36:33.765142] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.776 [2024-04-26 23:36:33.773947] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.776 [2024-04-26 23:36:33.774607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.774976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.774991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.776 [2024-04-26 23:36:33.775000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.776 [2024-04-26 23:36:33.775233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.776 [2024-04-26 23:36:33.775451] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.776 [2024-04-26 23:36:33.775461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.776 [2024-04-26 23:36:33.775469] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.776 [2024-04-26 23:36:33.778947] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.776 [2024-04-26 23:36:33.787750] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.776 [2024-04-26 23:36:33.788299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.788645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.788656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.776 [2024-04-26 23:36:33.788664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.776 [2024-04-26 23:36:33.788883] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.776 [2024-04-26 23:36:33.789100] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.776 [2024-04-26 23:36:33.789109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.776 [2024-04-26 23:36:33.789117] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.776 [2024-04-26 23:36:33.792582] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.776 [2024-04-26 23:36:33.801621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.776 [2024-04-26 23:36:33.801942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.802276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.802287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.776 [2024-04-26 23:36:33.802295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.776 [2024-04-26 23:36:33.802515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.776 [2024-04-26 23:36:33.802735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.776 [2024-04-26 23:36:33.802745] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.776 [2024-04-26 23:36:33.802752] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.776 [2024-04-26 23:36:33.806239] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.776 [2024-04-26 23:36:33.815447] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.776 [2024-04-26 23:36:33.816100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.816503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.816517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.776 [2024-04-26 23:36:33.816527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.776 [2024-04-26 23:36:33.816760] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.776 [2024-04-26 23:36:33.816987] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.776 [2024-04-26 23:36:33.816997] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.776 [2024-04-26 23:36:33.817005] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.776 [2024-04-26 23:36:33.820477] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.776 [2024-04-26 23:36:33.829275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.776 [2024-04-26 23:36:33.829827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.830188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.830199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.776 [2024-04-26 23:36:33.830207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.776 [2024-04-26 23:36:33.830423] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.776 [2024-04-26 23:36:33.830638] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.776 [2024-04-26 23:36:33.830647] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.776 [2024-04-26 23:36:33.830654] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.776 [2024-04-26 23:36:33.834127] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.776 [2024-04-26 23:36:33.843129] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.776 [2024-04-26 23:36:33.843707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.843911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.776 [2024-04-26 23:36:33.843922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.776 [2024-04-26 23:36:33.843930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.776 [2024-04-26 23:36:33.844145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.776 [2024-04-26 23:36:33.844361] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.776 [2024-04-26 23:36:33.844374] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.777 [2024-04-26 23:36:33.844382] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.777 [2024-04-26 23:36:33.847853] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.777 [2024-04-26 23:36:33.856860] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.777 [2024-04-26 23:36:33.857533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.857896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.857911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.777 [2024-04-26 23:36:33.857920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.777 [2024-04-26 23:36:33.858153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.777 [2024-04-26 23:36:33.858372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.777 [2024-04-26 23:36:33.858381] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.777 [2024-04-26 23:36:33.858389] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.777 [2024-04-26 23:36:33.861864] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.777 [2024-04-26 23:36:33.870750] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.777 [2024-04-26 23:36:33.871439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.871850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.871865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.777 [2024-04-26 23:36:33.871874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.777 [2024-04-26 23:36:33.872107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.777 [2024-04-26 23:36:33.872325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.777 [2024-04-26 23:36:33.872336] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.777 [2024-04-26 23:36:33.872344] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.777 [2024-04-26 23:36:33.875817] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.777 [2024-04-26 23:36:33.884624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.777 [2024-04-26 23:36:33.885251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.885568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.885581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.777 [2024-04-26 23:36:33.885591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.777 [2024-04-26 23:36:33.885824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.777 [2024-04-26 23:36:33.886050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.777 [2024-04-26 23:36:33.886061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.777 [2024-04-26 23:36:33.886073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.777 [2024-04-26 23:36:33.889545] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.777 [2024-04-26 23:36:33.898346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.777 [2024-04-26 23:36:33.899043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.899441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.899454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.777 [2024-04-26 23:36:33.899463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.777 [2024-04-26 23:36:33.899697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.777 [2024-04-26 23:36:33.899922] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.777 [2024-04-26 23:36:33.899932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.777 [2024-04-26 23:36:33.899939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.777 [2024-04-26 23:36:33.903413] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.777 [2024-04-26 23:36:33.912222] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.777 [2024-04-26 23:36:33.912901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.913283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.913296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.777 [2024-04-26 23:36:33.913306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.777 [2024-04-26 23:36:33.913539] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.777 [2024-04-26 23:36:33.913758] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.777 [2024-04-26 23:36:33.913768] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.777 [2024-04-26 23:36:33.913775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.777 [2024-04-26 23:36:33.917257] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.777 [2024-04-26 23:36:33.926056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.777 [2024-04-26 23:36:33.926597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.926927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.926941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.777 [2024-04-26 23:36:33.926949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.777 [2024-04-26 23:36:33.927165] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.777 [2024-04-26 23:36:33.927380] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.777 [2024-04-26 23:36:33.927389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.777 [2024-04-26 23:36:33.927396] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.777 [2024-04-26 23:36:33.930873] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.777 [2024-04-26 23:36:33.939876] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.777 [2024-04-26 23:36:33.940556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.940873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.940888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.777 [2024-04-26 23:36:33.940897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.777 [2024-04-26 23:36:33.941131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.777 [2024-04-26 23:36:33.941348] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.777 [2024-04-26 23:36:33.941359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.777 [2024-04-26 23:36:33.941367] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.777 [2024-04-26 23:36:33.944844] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.777 [2024-04-26 23:36:33.953643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.777 [2024-04-26 23:36:33.954327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.954539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.954555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.777 [2024-04-26 23:36:33.954564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.777 [2024-04-26 23:36:33.954797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.777 [2024-04-26 23:36:33.955023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.777 [2024-04-26 23:36:33.955032] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.777 [2024-04-26 23:36:33.955040] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.777 [2024-04-26 23:36:33.958513] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.777 [2024-04-26 23:36:33.967518] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.777 [2024-04-26 23:36:33.968203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.968564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.777 [2024-04-26 23:36:33.968578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.777 [2024-04-26 23:36:33.968587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.777 [2024-04-26 23:36:33.968821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.777 [2024-04-26 23:36:33.969050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.777 [2024-04-26 23:36:33.969060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.777 [2024-04-26 23:36:33.969068] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.777 [2024-04-26 23:36:33.972540] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.777 [2024-04-26 23:36:33.981343] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.777 [2024-04-26 23:36:33.982034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.778 [2024-04-26 23:36:33.982389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.778 [2024-04-26 23:36:33.982402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.778 [2024-04-26 23:36:33.982412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.778 [2024-04-26 23:36:33.982645] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.778 [2024-04-26 23:36:33.982870] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.778 [2024-04-26 23:36:33.982880] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.778 [2024-04-26 23:36:33.982888] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.778 [2024-04-26 23:36:33.986362] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.778 [2024-04-26 23:36:33.995165] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.778 [2024-04-26 23:36:33.995802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.778 [2024-04-26 23:36:33.996952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.778 [2024-04-26 23:36:33.996975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.778 [2024-04-26 23:36:33.996985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.778 [2024-04-26 23:36:33.997219] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.778 [2024-04-26 23:36:33.997439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.778 [2024-04-26 23:36:33.997449] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.778 [2024-04-26 23:36:33.997456] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.778 [2024-04-26 23:36:34.000936] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.778 [2024-04-26 23:36:34.008930] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.778 [2024-04-26 23:36:34.009491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.778 [2024-04-26 23:36:34.009810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.778 [2024-04-26 23:36:34.009820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.778 [2024-04-26 23:36:34.009828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.778 [2024-04-26 23:36:34.010049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.778 [2024-04-26 23:36:34.010266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.778 [2024-04-26 23:36:34.010274] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.778 [2024-04-26 23:36:34.010281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.778 [2024-04-26 23:36:34.013783] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.778 [2024-04-26 23:36:34.022795] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.778 [2024-04-26 23:36:34.023355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.778 [2024-04-26 23:36:34.023726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.778 [2024-04-26 23:36:34.023737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:44.778 [2024-04-26 23:36:34.023745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:44.778 [2024-04-26 23:36:34.023965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:44.778 [2024-04-26 23:36:34.024237] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.778 [2024-04-26 23:36:34.024247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.778 [2024-04-26 23:36:34.024254] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.064 [2024-04-26 23:36:34.027733] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.064 [2024-04-26 23:36:34.036531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.064 [2024-04-26 23:36:34.037189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.064 [2024-04-26 23:36:34.037551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.064 [2024-04-26 23:36:34.037564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.064 [2024-04-26 23:36:34.037574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.064 [2024-04-26 23:36:34.037807] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.064 [2024-04-26 23:36:34.038033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.064 [2024-04-26 23:36:34.038043] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.064 [2024-04-26 23:36:34.038051] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.064 [2024-04-26 23:36:34.041526] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.064 [2024-04-26 23:36:34.050327] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.064 [2024-04-26 23:36:34.050962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.064 [2024-04-26 23:36:34.051368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.064 [2024-04-26 23:36:34.051382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.064 [2024-04-26 23:36:34.051391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.064 [2024-04-26 23:36:34.051625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.064 [2024-04-26 23:36:34.051851] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.064 [2024-04-26 23:36:34.051861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.064 [2024-04-26 23:36:34.051868] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.064 [2024-04-26 23:36:34.055344] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.064 [2024-04-26 23:36:34.064147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.064 [2024-04-26 23:36:34.064747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.064 [2024-04-26 23:36:34.065127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.064 [2024-04-26 23:36:34.065146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.064 [2024-04-26 23:36:34.065155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.064 [2024-04-26 23:36:34.065389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.064 [2024-04-26 23:36:34.065607] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.064 [2024-04-26 23:36:34.065616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.064 [2024-04-26 23:36:34.065623] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.064 [2024-04-26 23:36:34.069102] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.064 [2024-04-26 23:36:34.077908] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.064 [2024-04-26 23:36:34.078389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.064 [2024-04-26 23:36:34.078640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.064 [2024-04-26 23:36:34.078651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.064 [2024-04-26 23:36:34.078659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.064 [2024-04-26 23:36:34.078881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.064 [2024-04-26 23:36:34.079097] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.064 [2024-04-26 23:36:34.079106] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.064 [2024-04-26 23:36:34.079113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.064 [2024-04-26 23:36:34.082581] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.064 [2024-04-26 23:36:34.091792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.064 [2024-04-26 23:36:34.092423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.064 [2024-04-26 23:36:34.093324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.064 [2024-04-26 23:36:34.093349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.065 [2024-04-26 23:36:34.093358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.065 [2024-04-26 23:36:34.093592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.065 [2024-04-26 23:36:34.093810] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.065 [2024-04-26 23:36:34.093820] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.065 [2024-04-26 23:36:34.093828] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.065 [2024-04-26 23:36:34.097309] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.065 [2024-04-26 23:36:34.105712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.065 [2024-04-26 23:36:34.106146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.106505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.106516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.065 [2024-04-26 23:36:34.106528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.065 [2024-04-26 23:36:34.106744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.065 [2024-04-26 23:36:34.106966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.065 [2024-04-26 23:36:34.106975] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.065 [2024-04-26 23:36:34.106983] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.065 [2024-04-26 23:36:34.110470] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.065 [2024-04-26 23:36:34.119475] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.065 [2024-04-26 23:36:34.120028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.120429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.120440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.065 [2024-04-26 23:36:34.120448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.065 [2024-04-26 23:36:34.120663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.065 [2024-04-26 23:36:34.120882] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.065 [2024-04-26 23:36:34.120892] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.065 [2024-04-26 23:36:34.120899] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.065 [2024-04-26 23:36:34.124371] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.065 [2024-04-26 23:36:34.133376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.065 [2024-04-26 23:36:34.133826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.134044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.134055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.065 [2024-04-26 23:36:34.134063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.065 [2024-04-26 23:36:34.134277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.065 [2024-04-26 23:36:34.134493] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.065 [2024-04-26 23:36:34.134503] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.065 [2024-04-26 23:36:34.134510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.065 [2024-04-26 23:36:34.138008] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.065 [2024-04-26 23:36:34.147221] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.065 [2024-04-26 23:36:34.147873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.148922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.148948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.065 [2024-04-26 23:36:34.148958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.065 [2024-04-26 23:36:34.149195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.065 [2024-04-26 23:36:34.149415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.065 [2024-04-26 23:36:34.149424] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.065 [2024-04-26 23:36:34.149432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.065 [2024-04-26 23:36:34.152912] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.065 [2024-04-26 23:36:34.161101] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.065 [2024-04-26 23:36:34.161650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.161977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.161989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.065 [2024-04-26 23:36:34.161997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.065 [2024-04-26 23:36:34.162214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.065 [2024-04-26 23:36:34.162430] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.065 [2024-04-26 23:36:34.162440] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.065 [2024-04-26 23:36:34.162447] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.065 [2024-04-26 23:36:34.165939] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.065 [2024-04-26 23:36:34.174944] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.065 [2024-04-26 23:36:34.175491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.175851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.175863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.065 [2024-04-26 23:36:34.175870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.065 [2024-04-26 23:36:34.176086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.065 [2024-04-26 23:36:34.176301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.065 [2024-04-26 23:36:34.176310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.065 [2024-04-26 23:36:34.176317] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.065 [2024-04-26 23:36:34.179787] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.065 [2024-04-26 23:36:34.188803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.065 [2024-04-26 23:36:34.189501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.189895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.189910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.065 [2024-04-26 23:36:34.189920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.065 [2024-04-26 23:36:34.190154] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.065 [2024-04-26 23:36:34.190376] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.065 [2024-04-26 23:36:34.190386] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.065 [2024-04-26 23:36:34.190393] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.065 [2024-04-26 23:36:34.193872] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.065 [2024-04-26 23:36:34.202670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.065 [2024-04-26 23:36:34.203248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.203616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.203627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.065 [2024-04-26 23:36:34.203634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.065 [2024-04-26 23:36:34.203854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.065 [2024-04-26 23:36:34.204071] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.065 [2024-04-26 23:36:34.204080] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.065 [2024-04-26 23:36:34.204087] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.065 [2024-04-26 23:36:34.207782] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.065 [2024-04-26 23:36:34.216386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.065 [2024-04-26 23:36:34.217074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.217441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.065 [2024-04-26 23:36:34.217455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.065 [2024-04-26 23:36:34.217464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.065 [2024-04-26 23:36:34.217697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.065 [2024-04-26 23:36:34.217922] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.065 [2024-04-26 23:36:34.217932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.066 [2024-04-26 23:36:34.217940] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.066 [2024-04-26 23:36:34.221413] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.066 [2024-04-26 23:36:34.230216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.066 [2024-04-26 23:36:34.230820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.231235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.231249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.066 [2024-04-26 23:36:34.231259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.066 [2024-04-26 23:36:34.231492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.066 [2024-04-26 23:36:34.231710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.066 [2024-04-26 23:36:34.231720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.066 [2024-04-26 23:36:34.231731] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.066 [2024-04-26 23:36:34.235210] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.066 [2024-04-26 23:36:34.244009] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.066 [2024-04-26 23:36:34.244558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.245034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.245072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.066 [2024-04-26 23:36:34.245083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.066 [2024-04-26 23:36:34.245317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.066 [2024-04-26 23:36:34.245536] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.066 [2024-04-26 23:36:34.245545] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.066 [2024-04-26 23:36:34.245553] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.066 [2024-04-26 23:36:34.249038] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.066 [2024-04-26 23:36:34.257840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.066 [2024-04-26 23:36:34.258538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.258919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.258934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.066 [2024-04-26 23:36:34.258943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.066 [2024-04-26 23:36:34.259176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.066 [2024-04-26 23:36:34.259395] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.066 [2024-04-26 23:36:34.259404] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.066 [2024-04-26 23:36:34.259411] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.066 [2024-04-26 23:36:34.262893] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.066 [2024-04-26 23:36:34.271700] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.066 [2024-04-26 23:36:34.272294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.272502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.272513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.066 [2024-04-26 23:36:34.272521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.066 [2024-04-26 23:36:34.272736] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.066 [2024-04-26 23:36:34.272956] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.066 [2024-04-26 23:36:34.272965] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.066 [2024-04-26 23:36:34.272972] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.066 [2024-04-26 23:36:34.276446] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.066 [2024-04-26 23:36:34.285457] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.066 [2024-04-26 23:36:34.286099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.286484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.286498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.066 [2024-04-26 23:36:34.286508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.066 [2024-04-26 23:36:34.286742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.066 [2024-04-26 23:36:34.286967] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.066 [2024-04-26 23:36:34.286977] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.066 [2024-04-26 23:36:34.286984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.066 [2024-04-26 23:36:34.290458] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.066 [2024-04-26 23:36:34.299260] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.066 [2024-04-26 23:36:34.299958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.300379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.300392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.066 [2024-04-26 23:36:34.300402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.066 [2024-04-26 23:36:34.300635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.066 [2024-04-26 23:36:34.300861] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.066 [2024-04-26 23:36:34.300870] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.066 [2024-04-26 23:36:34.300878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.066 [2024-04-26 23:36:34.304353] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.066 [2024-04-26 23:36:34.313165] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.066 [2024-04-26 23:36:34.313880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.314317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.066 [2024-04-26 23:36:34.314331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.066 [2024-04-26 23:36:34.314340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.066 [2024-04-26 23:36:34.314574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.066 [2024-04-26 23:36:34.314792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.066 [2024-04-26 23:36:34.314801] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.066 [2024-04-26 23:36:34.314809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.329 [2024-04-26 23:36:34.318290] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.329 [2024-04-26 23:36:34.326893] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.329 [2024-04-26 23:36:34.327457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.329 [2024-04-26 23:36:34.327797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.329 [2024-04-26 23:36:34.327807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.329 [2024-04-26 23:36:34.327815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.329 [2024-04-26 23:36:34.328035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.329 [2024-04-26 23:36:34.328251] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.329 [2024-04-26 23:36:34.328260] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.329 [2024-04-26 23:36:34.328267] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.329 [2024-04-26 23:36:34.331733] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.329 [2024-04-26 23:36:34.340739] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.329 [2024-04-26 23:36:34.341415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.329 [2024-04-26 23:36:34.341618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.329 [2024-04-26 23:36:34.341631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.329 [2024-04-26 23:36:34.341640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.329 [2024-04-26 23:36:34.341881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.329 [2024-04-26 23:36:34.342101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.329 [2024-04-26 23:36:34.342111] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.329 [2024-04-26 23:36:34.342119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.329 [2024-04-26 23:36:34.345623] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.329 [2024-04-26 23:36:34.354637] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.329 [2024-04-26 23:36:34.355213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.329 [2024-04-26 23:36:34.355548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.329 [2024-04-26 23:36:34.355561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.329 [2024-04-26 23:36:34.355571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.329 [2024-04-26 23:36:34.355804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.329 [2024-04-26 23:36:34.356029] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.329 [2024-04-26 23:36:34.356040] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.329 [2024-04-26 23:36:34.356047] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.329 [2024-04-26 23:36:34.359521] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.329 [2024-04-26 23:36:34.368529] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.329 [2024-04-26 23:36:34.369086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.329 [2024-04-26 23:36:34.369461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.329 [2024-04-26 23:36:34.369472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.329 [2024-04-26 23:36:34.369480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.329 [2024-04-26 23:36:34.369695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.329 [2024-04-26 23:36:34.369916] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.329 [2024-04-26 23:36:34.369926] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.330 [2024-04-26 23:36:34.369933] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.330 [2024-04-26 23:36:34.373406] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.330 [2024-04-26 23:36:34.382408] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.330 [2024-04-26 23:36:34.382944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.383162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.383172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.330 [2024-04-26 23:36:34.383181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.330 [2024-04-26 23:36:34.383396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.330 [2024-04-26 23:36:34.383612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.330 [2024-04-26 23:36:34.383621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.330 [2024-04-26 23:36:34.383628] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.330 [2024-04-26 23:36:34.387109] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.330 [2024-04-26 23:36:34.396112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.330 [2024-04-26 23:36:34.396623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.396824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.396834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.330 [2024-04-26 23:36:34.396847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.330 [2024-04-26 23:36:34.397061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.330 [2024-04-26 23:36:34.397278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.330 [2024-04-26 23:36:34.397288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.330 [2024-04-26 23:36:34.397295] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.330 [2024-04-26 23:36:34.400760] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.330 [2024-04-26 23:36:34.409973] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.330 [2024-04-26 23:36:34.410549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.410876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.410891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.330 [2024-04-26 23:36:34.410898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.330 [2024-04-26 23:36:34.411113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.330 [2024-04-26 23:36:34.411327] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.330 [2024-04-26 23:36:34.411337] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.330 [2024-04-26 23:36:34.411344] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.330 [2024-04-26 23:36:34.414812] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.330 [2024-04-26 23:36:34.423826] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.330 [2024-04-26 23:36:34.424486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.424812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.424822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.330 [2024-04-26 23:36:34.424829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.330 [2024-04-26 23:36:34.425050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.330 [2024-04-26 23:36:34.425266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.330 [2024-04-26 23:36:34.425275] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.330 [2024-04-26 23:36:34.425282] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.330 [2024-04-26 23:36:34.428756] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.330 [2024-04-26 23:36:34.437768] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.330 [2024-04-26 23:36:34.438349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.438705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.438715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.330 [2024-04-26 23:36:34.438722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.330 [2024-04-26 23:36:34.438943] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.330 [2024-04-26 23:36:34.439159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.330 [2024-04-26 23:36:34.439168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.330 [2024-04-26 23:36:34.439176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.330 [2024-04-26 23:36:34.442647] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.330 [2024-04-26 23:36:34.451662] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.330 [2024-04-26 23:36:34.452228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.452521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.452531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.330 [2024-04-26 23:36:34.452542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.330 [2024-04-26 23:36:34.452757] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.330 [2024-04-26 23:36:34.452980] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.330 [2024-04-26 23:36:34.452990] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.330 [2024-04-26 23:36:34.452998] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.330 [2024-04-26 23:36:34.456468] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.330 [2024-04-26 23:36:34.465484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.330 [2024-04-26 23:36:34.466030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.466235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.466245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.330 [2024-04-26 23:36:34.466252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.330 [2024-04-26 23:36:34.466467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.330 [2024-04-26 23:36:34.466682] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.330 [2024-04-26 23:36:34.466690] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.330 [2024-04-26 23:36:34.466697] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.330 [2024-04-26 23:36:34.470173] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.330 [2024-04-26 23:36:34.479389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.330 [2024-04-26 23:36:34.479983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.480363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.480377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.330 [2024-04-26 23:36:34.480386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.330 [2024-04-26 23:36:34.480620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.330 [2024-04-26 23:36:34.480846] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.330 [2024-04-26 23:36:34.480856] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.330 [2024-04-26 23:36:34.480864] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.330 [2024-04-26 23:36:34.484337] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.330 [2024-04-26 23:36:34.493144] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.330 [2024-04-26 23:36:34.493824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.494246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.330 [2024-04-26 23:36:34.494260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.330 [2024-04-26 23:36:34.494269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.330 [2024-04-26 23:36:34.494506] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.330 [2024-04-26 23:36:34.494724] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.330 [2024-04-26 23:36:34.494733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.330 [2024-04-26 23:36:34.494741] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.331 [2024-04-26 23:36:34.498217] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.331 [2024-04-26 23:36:34.507038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.331 [2024-04-26 23:36:34.507580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.507867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.507880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.331 [2024-04-26 23:36:34.507887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.331 [2024-04-26 23:36:34.508103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.331 [2024-04-26 23:36:34.508319] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.331 [2024-04-26 23:36:34.508328] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.331 [2024-04-26 23:36:34.508335] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.331 [2024-04-26 23:36:34.511807] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.331 [2024-04-26 23:36:34.520829] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.331 [2024-04-26 23:36:34.521415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.521775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.521785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.331 [2024-04-26 23:36:34.521792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.331 [2024-04-26 23:36:34.522013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.331 [2024-04-26 23:36:34.522228] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.331 [2024-04-26 23:36:34.522239] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.331 [2024-04-26 23:36:34.522246] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.331 [2024-04-26 23:36:34.525717] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.331 [2024-04-26 23:36:34.534732] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.331 [2024-04-26 23:36:34.535319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.535669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.535680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.331 [2024-04-26 23:36:34.535687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.331 [2024-04-26 23:36:34.535907] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.331 [2024-04-26 23:36:34.536127] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.331 [2024-04-26 23:36:34.536136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.331 [2024-04-26 23:36:34.536143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.331 [2024-04-26 23:36:34.539612] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.331 [2024-04-26 23:36:34.548625] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.331 [2024-04-26 23:36:34.549182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.549504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.549514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.331 [2024-04-26 23:36:34.549522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.331 [2024-04-26 23:36:34.549736] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.331 [2024-04-26 23:36:34.549957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.331 [2024-04-26 23:36:34.549968] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.331 [2024-04-26 23:36:34.549975] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.331 [2024-04-26 23:36:34.553476] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.331 [2024-04-26 23:36:34.562492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.331 [2024-04-26 23:36:34.563149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.563540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.563554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.331 [2024-04-26 23:36:34.563563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.331 [2024-04-26 23:36:34.563797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.331 [2024-04-26 23:36:34.564023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.331 [2024-04-26 23:36:34.564033] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.331 [2024-04-26 23:36:34.564040] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.331 [2024-04-26 23:36:34.567516] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.331 [2024-04-26 23:36:34.576323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.331 [2024-04-26 23:36:34.576972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.577376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.331 [2024-04-26 23:36:34.577390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.331 [2024-04-26 23:36:34.577399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.331 [2024-04-26 23:36:34.577633] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.331 [2024-04-26 23:36:34.577858] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.331 [2024-04-26 23:36:34.577873] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.331 [2024-04-26 23:36:34.577880] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.331 [2024-04-26 23:36:34.581354] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.596 [2024-04-26 23:36:34.590158] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.596 [2024-04-26 23:36:34.590844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.591215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.591229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.596 [2024-04-26 23:36:34.591238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.596 [2024-04-26 23:36:34.591472] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.596 [2024-04-26 23:36:34.591690] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.596 [2024-04-26 23:36:34.591700] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.596 [2024-04-26 23:36:34.591707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.596 [2024-04-26 23:36:34.595191] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.596 [2024-04-26 23:36:34.603991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.596 [2024-04-26 23:36:34.604635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.605026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.605042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.596 [2024-04-26 23:36:34.605052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.596 [2024-04-26 23:36:34.605285] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.596 [2024-04-26 23:36:34.605503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.596 [2024-04-26 23:36:34.605512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.596 [2024-04-26 23:36:34.605519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.596 [2024-04-26 23:36:34.609011] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.596 [2024-04-26 23:36:34.617814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.596 [2024-04-26 23:36:34.618366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.618698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.618710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.596 [2024-04-26 23:36:34.618718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.596 [2024-04-26 23:36:34.618939] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.596 [2024-04-26 23:36:34.619156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.596 [2024-04-26 23:36:34.619164] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.596 [2024-04-26 23:36:34.619176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.596 [2024-04-26 23:36:34.622644] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.596 [2024-04-26 23:36:34.631654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.596 [2024-04-26 23:36:34.632330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.632733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.632747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.596 [2024-04-26 23:36:34.632756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.596 [2024-04-26 23:36:34.632999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.596 [2024-04-26 23:36:34.633218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.596 [2024-04-26 23:36:34.633228] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.596 [2024-04-26 23:36:34.633235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.596 [2024-04-26 23:36:34.636714] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.596 [2024-04-26 23:36:34.645524] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.596 [2024-04-26 23:36:34.646152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.646516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.646530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.596 [2024-04-26 23:36:34.646539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.596 [2024-04-26 23:36:34.646772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.596 [2024-04-26 23:36:34.646998] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.596 [2024-04-26 23:36:34.647008] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.596 [2024-04-26 23:36:34.647015] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.596 [2024-04-26 23:36:34.650491] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.596 [2024-04-26 23:36:34.659286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.596 [2024-04-26 23:36:34.659947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.660308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.596 [2024-04-26 23:36:34.660322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.596 [2024-04-26 23:36:34.660332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.596 [2024-04-26 23:36:34.660565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.597 [2024-04-26 23:36:34.660784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.597 [2024-04-26 23:36:34.660794] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.597 [2024-04-26 23:36:34.660801] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.597 [2024-04-26 23:36:34.664287] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.597 [2024-04-26 23:36:34.673091] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.597 [2024-04-26 23:36:34.673794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.597 [2024-04-26 23:36:34.674208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.597 [2024-04-26 23:36:34.674223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:45.597 [2024-04-26 23:36:34.674232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:45.597 [2024-04-26 23:36:34.674465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:45.597 [2024-04-26 23:36:34.674684] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.597 [2024-04-26 23:36:34.674693] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.597 [2024-04-26 23:36:34.674701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.597 [2024-04-26 23:36:34.678177] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 44 further identical reset cycles omitted (2024-04-26 23:36:34.686975 through 23:36:35.286084): each repeats "resetting controller", "connect() failed, errno = 111" (twice), "controller reinitialization failed", and "Resetting controller failed." against tqpair=0xe56640, addr=10.0.0.2, port=4420 ...]
00:33:46.124 [2024-04-26 23:36:35.294880] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.124 [2024-04-26 23:36:35.295540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.124 [2024-04-26 23:36:35.295936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.124 [2024-04-26 23:36:35.295951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.124 [2024-04-26 23:36:35.295961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.124 [2024-04-26 23:36:35.296194] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.124 [2024-04-26 23:36:35.296413] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.124 [2024-04-26 23:36:35.296422] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.124 [2024-04-26 23:36:35.296429] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.124 [2024-04-26 23:36:35.299907] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.124 [2024-04-26 23:36:35.308711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.124 [2024-04-26 23:36:35.309397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.124 [2024-04-26 23:36:35.309749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.124 [2024-04-26 23:36:35.309763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.124 [2024-04-26 23:36:35.309772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.124 [2024-04-26 23:36:35.310014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.124 [2024-04-26 23:36:35.310234] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.124 [2024-04-26 23:36:35.310247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.124 [2024-04-26 23:36:35.310255] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.124 [2024-04-26 23:36:35.313727] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.124 [2024-04-26 23:36:35.322530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.124 [2024-04-26 23:36:35.323098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.124 [2024-04-26 23:36:35.323450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.124 [2024-04-26 23:36:35.323463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.124 [2024-04-26 23:36:35.323472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.124 [2024-04-26 23:36:35.323706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.124 [2024-04-26 23:36:35.323934] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.124 [2024-04-26 23:36:35.323944] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.124 [2024-04-26 23:36:35.323951] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.124 [2024-04-26 23:36:35.327423] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.124 [2024-04-26 23:36:35.336425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.124 [2024-04-26 23:36:35.337104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.124 [2024-04-26 23:36:35.337506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.124 [2024-04-26 23:36:35.337520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.124 [2024-04-26 23:36:35.337529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.124 [2024-04-26 23:36:35.337762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.124 [2024-04-26 23:36:35.337990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.124 [2024-04-26 23:36:35.338001] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.124 [2024-04-26 23:36:35.338008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.124 [2024-04-26 23:36:35.341482] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.124 [2024-04-26 23:36:35.350282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.124 [2024-04-26 23:36:35.350962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.124 [2024-04-26 23:36:35.351366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.124 [2024-04-26 23:36:35.351380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.124 [2024-04-26 23:36:35.351389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.124 [2024-04-26 23:36:35.351622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.125 [2024-04-26 23:36:35.351849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.125 [2024-04-26 23:36:35.351860] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.125 [2024-04-26 23:36:35.351871] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.125 [2024-04-26 23:36:35.355343] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.125 [2024-04-26 23:36:35.364147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.125 [2024-04-26 23:36:35.364706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.125 [2024-04-26 23:36:35.365088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.125 [2024-04-26 23:36:35.365104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.125 [2024-04-26 23:36:35.365113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.125 [2024-04-26 23:36:35.365346] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.125 [2024-04-26 23:36:35.365565] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.125 [2024-04-26 23:36:35.365574] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.125 [2024-04-26 23:36:35.365581] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.125 [2024-04-26 23:36:35.369059] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.386 [2024-04-26 23:36:35.377869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.386 [2024-04-26 23:36:35.378551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.386 [2024-04-26 23:36:35.378929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.386 [2024-04-26 23:36:35.378944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.386 [2024-04-26 23:36:35.378953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.386 [2024-04-26 23:36:35.379187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.386 [2024-04-26 23:36:35.379404] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.386 [2024-04-26 23:36:35.379415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.386 [2024-04-26 23:36:35.379422] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.386 [2024-04-26 23:36:35.382926] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.386 [2024-04-26 23:36:35.391735] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.386 [2024-04-26 23:36:35.392324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.386 [2024-04-26 23:36:35.392683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.386 [2024-04-26 23:36:35.392694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.386 [2024-04-26 23:36:35.392702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.386 [2024-04-26 23:36:35.392923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.386 [2024-04-26 23:36:35.393138] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.386 [2024-04-26 23:36:35.393148] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.386 [2024-04-26 23:36:35.393155] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.386 [2024-04-26 23:36:35.396637] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.386 [2024-04-26 23:36:35.405645] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.386 [2024-04-26 23:36:35.406185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.386 [2024-04-26 23:36:35.406421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.386 [2024-04-26 23:36:35.406431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.386 [2024-04-26 23:36:35.406438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.386 [2024-04-26 23:36:35.406653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.386 [2024-04-26 23:36:35.406883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.386 [2024-04-26 23:36:35.406900] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.386 [2024-04-26 23:36:35.406907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.386 [2024-04-26 23:36:35.410373] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.386 [2024-04-26 23:36:35.419369] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.387 [2024-04-26 23:36:35.419960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.420311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.420325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.387 [2024-04-26 23:36:35.420334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.387 [2024-04-26 23:36:35.420568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.387 [2024-04-26 23:36:35.420786] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.387 [2024-04-26 23:36:35.420795] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.387 [2024-04-26 23:36:35.420802] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.387 [2024-04-26 23:36:35.424292] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.387 [2024-04-26 23:36:35.433108] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.387 [2024-04-26 23:36:35.433757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.434132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.434147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.387 [2024-04-26 23:36:35.434157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.387 [2024-04-26 23:36:35.434390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.387 [2024-04-26 23:36:35.434609] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.387 [2024-04-26 23:36:35.434618] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.387 [2024-04-26 23:36:35.434625] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.387 [2024-04-26 23:36:35.438110] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.387 [2024-04-26 23:36:35.446927] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.387 [2024-04-26 23:36:35.447476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.447835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.447855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.387 [2024-04-26 23:36:35.447863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.387 [2024-04-26 23:36:35.448078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.387 [2024-04-26 23:36:35.448294] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.387 [2024-04-26 23:36:35.448303] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.387 [2024-04-26 23:36:35.448310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.387 [2024-04-26 23:36:35.451780] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.387 [2024-04-26 23:36:35.460792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.387 [2024-04-26 23:36:35.461337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.461659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.461670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.387 [2024-04-26 23:36:35.461677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.387 [2024-04-26 23:36:35.461897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.387 [2024-04-26 23:36:35.462112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.387 [2024-04-26 23:36:35.462122] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.387 [2024-04-26 23:36:35.462129] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.387 [2024-04-26 23:36:35.465601] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.387 [2024-04-26 23:36:35.474614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.387 [2024-04-26 23:36:35.475272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.475637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.475650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.387 [2024-04-26 23:36:35.475660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.387 [2024-04-26 23:36:35.475902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.387 [2024-04-26 23:36:35.476121] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.387 [2024-04-26 23:36:35.476130] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.387 [2024-04-26 23:36:35.476137] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.387 [2024-04-26 23:36:35.479616] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.387 [2024-04-26 23:36:35.488506] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.387 [2024-04-26 23:36:35.489166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.489531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.489545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.387 [2024-04-26 23:36:35.489555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.387 [2024-04-26 23:36:35.489788] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.387 [2024-04-26 23:36:35.490013] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.387 [2024-04-26 23:36:35.490024] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.387 [2024-04-26 23:36:35.490031] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.387 [2024-04-26 23:36:35.493502] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.387 [2024-04-26 23:36:35.502303] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.387 [2024-04-26 23:36:35.502974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.503378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.503391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.387 [2024-04-26 23:36:35.503401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.387 [2024-04-26 23:36:35.503635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.387 [2024-04-26 23:36:35.503859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.387 [2024-04-26 23:36:35.503869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.387 [2024-04-26 23:36:35.503877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.387 [2024-04-26 23:36:35.507350] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.387 [2024-04-26 23:36:35.516163] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.387 [2024-04-26 23:36:35.516856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.517217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.517230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.387 [2024-04-26 23:36:35.517240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.387 [2024-04-26 23:36:35.517473] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.387 [2024-04-26 23:36:35.517692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.387 [2024-04-26 23:36:35.517702] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.387 [2024-04-26 23:36:35.517709] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.387 [2024-04-26 23:36:35.521189] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.387 [2024-04-26 23:36:35.529995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.387 [2024-04-26 23:36:35.530516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.530917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.530936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.387 [2024-04-26 23:36:35.530945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.387 [2024-04-26 23:36:35.531179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.387 [2024-04-26 23:36:35.531396] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.387 [2024-04-26 23:36:35.531405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.387 [2024-04-26 23:36:35.531413] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.387 [2024-04-26 23:36:35.534891] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.387 [2024-04-26 23:36:35.543895] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.387 [2024-04-26 23:36:35.544596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.544949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.387 [2024-04-26 23:36:35.544965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.387 [2024-04-26 23:36:35.544974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.388 [2024-04-26 23:36:35.545208] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.388 [2024-04-26 23:36:35.545425] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.388 [2024-04-26 23:36:35.545435] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.388 [2024-04-26 23:36:35.545442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.388 [2024-04-26 23:36:35.548921] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.388 [2024-04-26 23:36:35.557715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.388 [2024-04-26 23:36:35.558400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.558796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.558809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.388 [2024-04-26 23:36:35.558818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.388 [2024-04-26 23:36:35.559060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.388 [2024-04-26 23:36:35.559279] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.388 [2024-04-26 23:36:35.559288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.388 [2024-04-26 23:36:35.559295] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.388 [2024-04-26 23:36:35.562768] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.388 [2024-04-26 23:36:35.571567] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.388 [2024-04-26 23:36:35.572239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.572589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.572603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.388 [2024-04-26 23:36:35.572616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.388 [2024-04-26 23:36:35.572857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.388 [2024-04-26 23:36:35.573076] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.388 [2024-04-26 23:36:35.573086] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.388 [2024-04-26 23:36:35.573094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.388 [2024-04-26 23:36:35.576568] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.388 [2024-04-26 23:36:35.585362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.388 [2024-04-26 23:36:35.585944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.586313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.586327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.388 [2024-04-26 23:36:35.586336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.388 [2024-04-26 23:36:35.586569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.388 [2024-04-26 23:36:35.586787] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.388 [2024-04-26 23:36:35.586796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.388 [2024-04-26 23:36:35.586804] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.388 [2024-04-26 23:36:35.590285] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.388 [2024-04-26 23:36:35.599120] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.388 [2024-04-26 23:36:35.599778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.600162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.600176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.388 [2024-04-26 23:36:35.600186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.388 [2024-04-26 23:36:35.600419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.388 [2024-04-26 23:36:35.600638] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.388 [2024-04-26 23:36:35.600649] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.388 [2024-04-26 23:36:35.600657] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.388 [2024-04-26 23:36:35.604135] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.388 [2024-04-26 23:36:35.612947] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.388 [2024-04-26 23:36:35.613508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.613859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.613874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.388 [2024-04-26 23:36:35.613883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.388 [2024-04-26 23:36:35.614121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.388 [2024-04-26 23:36:35.614340] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.388 [2024-04-26 23:36:35.614350] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.388 [2024-04-26 23:36:35.614357] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.388 [2024-04-26 23:36:35.617831] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.388 [2024-04-26 23:36:35.626841] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.388 [2024-04-26 23:36:35.627501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.627867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.388 [2024-04-26 23:36:35.627882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.388 [2024-04-26 23:36:35.627892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.388 [2024-04-26 23:36:35.628125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.388 [2024-04-26 23:36:35.628343] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.388 [2024-04-26 23:36:35.628352] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.388 [2024-04-26 23:36:35.628360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.388 [2024-04-26 23:36:35.631844] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.650 [2024-04-26 23:36:35.640645] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.650 [2024-04-26 23:36:35.641241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.641597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.641608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.650 [2024-04-26 23:36:35.641616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.650 [2024-04-26 23:36:35.641832] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.650 [2024-04-26 23:36:35.642055] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.650 [2024-04-26 23:36:35.642065] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.650 [2024-04-26 23:36:35.642073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.650 [2024-04-26 23:36:35.645541] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.650 [2024-04-26 23:36:35.654547] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.650 [2024-04-26 23:36:35.655034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.655409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.655420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.650 [2024-04-26 23:36:35.655427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.650 [2024-04-26 23:36:35.655643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.650 [2024-04-26 23:36:35.655866] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.650 [2024-04-26 23:36:35.655876] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.650 [2024-04-26 23:36:35.655883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.650 [2024-04-26 23:36:35.659356] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.650 [2024-04-26 23:36:35.668358] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.650 [2024-04-26 23:36:35.668974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.669339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.669353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.650 [2024-04-26 23:36:35.669363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.650 [2024-04-26 23:36:35.669597] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.650 [2024-04-26 23:36:35.669815] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.650 [2024-04-26 23:36:35.669826] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.650 [2024-04-26 23:36:35.669834] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.650 [2024-04-26 23:36:35.673317] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.650 [2024-04-26 23:36:35.682119] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.650 [2024-04-26 23:36:35.682654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.683007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.683019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.650 [2024-04-26 23:36:35.683027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.650 [2024-04-26 23:36:35.683242] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.650 [2024-04-26 23:36:35.683457] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.650 [2024-04-26 23:36:35.683467] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.650 [2024-04-26 23:36:35.683474] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.650 [2024-04-26 23:36:35.686947] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.650 [2024-04-26 23:36:35.695973] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.650 [2024-04-26 23:36:35.696612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.696957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.696972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.650 [2024-04-26 23:36:35.696981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.650 [2024-04-26 23:36:35.697214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.650 [2024-04-26 23:36:35.697432] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.650 [2024-04-26 23:36:35.697445] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.650 [2024-04-26 23:36:35.697453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.650 [2024-04-26 23:36:35.700930] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.650 [2024-04-26 23:36:35.709747] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.650 [2024-04-26 23:36:35.710292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.710642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.710653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.650 [2024-04-26 23:36:35.710661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.650 [2024-04-26 23:36:35.710881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.650 [2024-04-26 23:36:35.711098] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.650 [2024-04-26 23:36:35.711108] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.650 [2024-04-26 23:36:35.711115] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.650 [2024-04-26 23:36:35.714583] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.650 [2024-04-26 23:36:35.723608] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.650 [2024-04-26 23:36:35.724255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.724610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.650 [2024-04-26 23:36:35.724624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.650 [2024-04-26 23:36:35.724633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.650 [2024-04-26 23:36:35.724873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.650 [2024-04-26 23:36:35.725092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.650 [2024-04-26 23:36:35.725102] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.650 [2024-04-26 23:36:35.725110] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.651 [2024-04-26 23:36:35.728583] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.651 [2024-04-26 23:36:35.737387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.651 [2024-04-26 23:36:35.738061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.651 [2024-04-26 23:36:35.738410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.651 [2024-04-26 23:36:35.738424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.651 [2024-04-26 23:36:35.738433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.651 [2024-04-26 23:36:35.738667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.651 [2024-04-26 23:36:35.738892] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.651 [2024-04-26 23:36:35.738902] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.651 [2024-04-26 23:36:35.738914] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.651 [2024-04-26 23:36:35.742388] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.651 [2024-04-26 23:36:35.751193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.651 [2024-04-26 23:36:35.751937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.651 [2024-04-26 23:36:35.752316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.651 [2024-04-26 23:36:35.752330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.651 [2024-04-26 23:36:35.752340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.651 [2024-04-26 23:36:35.752573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.651 [2024-04-26 23:36:35.752792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.651 [2024-04-26 23:36:35.752802] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.651 [2024-04-26 23:36:35.752810] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.651 [2024-04-26 23:36:35.756291] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.651 [2024-04-26 23:36:35.765097] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.651 [2024-04-26 23:36:35.765790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.651 [2024-04-26 23:36:35.766165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.651 [2024-04-26 23:36:35.766179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.651 [2024-04-26 23:36:35.766189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.651 [2024-04-26 23:36:35.766423] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.651 [2024-04-26 23:36:35.766641] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.651 [2024-04-26 23:36:35.766651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.651 [2024-04-26 23:36:35.766659] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.651 [2024-04-26 23:36:35.770135] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.651 [2024-04-26 23:36:35.778941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.651 [2024-04-26 23:36:35.779496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.651 [2024-04-26 23:36:35.779852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.651 [2024-04-26 23:36:35.779864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:46.651 [2024-04-26 23:36:35.779873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:46.651 [2024-04-26 23:36:35.780088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:46.651 [2024-04-26 23:36:35.780303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.651 [2024-04-26 23:36:35.780313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.651 [2024-04-26 23:36:35.780320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.651 [2024-04-26 23:36:35.783792] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.651 [... log condensed: the identical ten-line reset cycle above (disconnect NOTICE, two connect() failures with errno = 111, sock connection error on tqpair=0xe56640 addr=10.0.0.2 port=4420, recv-state error, failed flush, "Ctrlr is in error state", "controller reinitialization failed", "in failed state.", "Resetting controller failed.") repeats ~40 more times between 2024-04-26 23:36:35.792 and 23:36:36.336, roughly every 14 ms, with only the timestamps changing; the Jenkins prefix advances from 00:33:46.651 to 00:33:47.180 over the run ...]
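[Note: the condensed region traces SPDK's asynchronous reconnect path repeating: nvme_ctrlr_disconnect, a refused transport connect, spdk_nvme_ctrlr_reconnect_poll_async reporting the failure, then bdev_nvme scheduling the next reset. The hypothetical C sketch below mirrors only that observable loop and its roughly 14 ms cadence; it is not SPDK's implementation, and try_connect() is a stand-in for the real transport connect call.]

/* reset_retry.c - shape of the reconnect/reset loop seen in the log.
 * NOT SPDK code: try_connect() stands in for nvme_tcp_qpair_connect_sock(),
 * and the 14 ms sleep approximates the cadence of the log timestamps. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool try_connect(void)
{
    return false; /* target is down: every connect() is refused (errno 111) */
}

int main(void)
{
    for (int attempt = 1; attempt <= 100; attempt++) {
        printf("resetting controller (attempt %d)\n", attempt);
        if (try_connect()) {
            printf("controller reinitialized\n");
            return 0;
        }
        /* Mirrors: "controller reinitialization failed" -> "in failed state." */
        printf("attempt %d failed; scheduling next reset\n", attempt);
        usleep(14000); /* ~14 ms between cycles, per the log */
    }
    fprintf(stderr, "giving up: resetting controller failed\n");
    return 1;
}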
00:33:47.180 [2024-04-26 23:36:36.345479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.180 [2024-04-26 23:36:36.346173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.180 [2024-04-26 23:36:36.346569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.180 [2024-04-26 23:36:36.346583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420
00:33:47.180 [2024-04-26 23:36:36.346592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set
00:33:47.180 [2024-04-26 23:36:36.346830] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor
00:33:47.180 [2024-04-26 23:36:36.347057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.180 [2024-04-26 23:36:36.347067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.180 [2024-04-26 23:36:36.347075] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.180 [2024-04-26 23:36:36.350546] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:47.180 [2024-04-26 23:36:36.359342] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.180 [2024-04-26 23:36:36.359943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.180 [2024-04-26 23:36:36.360315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.180 [2024-04-26 23:36:36.360329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420
00:33:47.180 [2024-04-26 23:36:36.360339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set
00:33:47.180 [2024-04-26 23:36:36.360573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor
00:33:47.180 [2024-04-26 23:36:36.360791] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.180 [2024-04-26 23:36:36.360801] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.180 [2024-04-26 23:36:36.360808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4180901 Killed "${NVMF_APP[@]}" "$@"
00:33:47.180 23:36:36 -- host/bdevperf.sh@36 -- # tgt_init
00:33:47.180 23:36:36 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:47.180 23:36:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:33:47.180 23:36:36 -- common/autotest_common.sh@710 -- # xtrace_disable
00:33:47.180 23:36:36 -- common/autotest_common.sh@10 -- # set +x
00:33:47.180 [2024-04-26 23:36:36.364294] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:47.180 23:36:36 -- nvmf/common.sh@470 -- # nvmfpid=4182604
00:33:47.180 23:36:36 -- nvmf/common.sh@471 -- # waitforlisten 4182604
00:33:47.180 23:36:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:47.180 23:36:36 -- common/autotest_common.sh@817 -- # '[' -z 4182604 ']'
00:33:47.180 23:36:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:47.180 23:36:36 -- common/autotest_common.sh@822 -- # local max_retries=100
00:33:47.180 23:36:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:47.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:47.180 23:36:36 -- common/autotest_common.sh@826 -- # xtrace_disable
00:33:47.180 23:36:36 -- common/autotest_common.sh@10 -- # set +x
00:33:47.180 [2024-04-26 23:36:36.373098] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.180 [2024-04-26 23:36:36.373790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.180 [2024-04-26 23:36:36.374212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.180 [2024-04-26 23:36:36.374226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420
00:33:47.180 [2024-04-26 23:36:36.374236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set
00:33:47.180 [2024-04-26 23:36:36.374469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor
00:33:47.180 [2024-04-26 23:36:36.374687] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.180 [2024-04-26 23:36:36.374702] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.180 [2024-04-26 23:36:36.374709] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.180 [2024-04-26 23:36:36.378188] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
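
A fresh target is now starting: the ip netns exec line above launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with event mask 0xFFFF and core mask 0xE, and waitforlisten (nvmf/common.sh@471) polls, up to max_retries=100, until the new process (pid 4182604) accepts connections on its JSON-RPC socket at /var/tmp/spdk.sock. A minimal C sketch of that kind of readiness probe, assuming only that the application opens a Unix-domain listener at that path once it is up:

/* Sketch of a waitforlisten-style probe: retry a Unix-domain connect()
 * until the RPC socket accepts. Path from the log; budget is arbitrary. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };

	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	for (int retries = 0; retries < 100; retries++) { /* mirrors max_retries=100 */
		int fd = socket(AF_UNIX, SOCK_STREAM, 0);

		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
			printf("RPC socket is up\n");
			close(fd);
			return 0;
		}
		close(fd);
		usleep(100 * 1000); /* wait 100 ms between attempts */
	}
	fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
	return 1;
}
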
00:33:47.180 [2024-04-26 23:36:36.386989] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.180 [2024-04-26 23:36:36.387504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-04-26 23:36:36.387869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-04-26 23:36:36.387883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.180 [2024-04-26 23:36:36.387893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.180 [2024-04-26 23:36:36.388126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.180 [2024-04-26 23:36:36.388345] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.180 [2024-04-26 23:36:36.388355] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.180 [2024-04-26 23:36:36.388363] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.180 [2024-04-26 23:36:36.391842] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.180 [2024-04-26 23:36:36.400887] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.180 [2024-04-26 23:36:36.401445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-04-26 23:36:36.401856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-04-26 23:36:36.401871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.180 [2024-04-26 23:36:36.401880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.180 [2024-04-26 23:36:36.402114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.180 [2024-04-26 23:36:36.402331] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.180 [2024-04-26 23:36:36.402341] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.180 [2024-04-26 23:36:36.402348] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.180 [2024-04-26 23:36:36.405850] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.180 [2024-04-26 23:36:36.414680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.180 [2024-04-26 23:36:36.415239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.180 [2024-04-26 23:36:36.415620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.180 [2024-04-26 23:36:36.415634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420
00:33:47.180 [2024-04-26 23:36:36.415643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set
00:33:47.181 [2024-04-26 23:36:36.415884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor
00:33:47.181 [2024-04-26 23:36:36.416103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.181 [2024-04-26 23:36:36.416112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.181 [2024-04-26 23:36:36.416124] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.181 [2024-04-26 23:36:36.419598] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:47.181 [2024-04-26 23:36:36.428433] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.181 [2024-04-26 23:36:36.428950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.181 [2024-04-26 23:36:36.429310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.181 [2024-04-26 23:36:36.429320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420
00:33:47.181 [2024-04-26 23:36:36.429328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set
00:33:47.181 [2024-04-26 23:36:36.429544] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor
00:33:47.181 [2024-04-26 23:36:36.429759] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.181 [2024-04-26 23:36:36.429769] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.181 [2024-04-26 23:36:36.429777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.443 [2024-04-26 23:36:36.432351] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:33:47.443 [2024-04-26 23:36:36.432397] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:47.443 [2024-04-26 23:36:36.433257] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
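
The Starting SPDK line and the bracketed DPDK EAL parameters are the new target's environment-abstraction layer coming up: SPDK v24.05-pre hands the core mask, base virtual address, and hugepage file prefix to DPDK 23.11.0 before any NVMe-oF subsystem exists, which is why the reset retries above still fail at this point. A sketch, assuming a host with DPDK development headers installed, of feeding a subset of those same flags to rte_eal_init(); illustrative plumbing, not SPDK's actual startup path:

/* Sketch (assumes DPDK dev packages): pass EAL flags from the log line
 * above to rte_eal_init(). Build against libdpdk, e.g. via pkg-config. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
	char *eal_argv[] = {
		"nvmf",                           /* argv[0] slot, as in the log */
		"-c", "0xE",                      /* same cores as nvmf_tgt -m 0xE */
		"--no-telemetry",
		"--base-virtaddr=0x200000000000",
		"--match-allocations",
		"--file-prefix=spdk0",            /* per-instance hugepage files */
		"--proc-type=auto",
	};
	int argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

	if (rte_eal_init(argc, eal_argv) < 0) {
		fprintf(stderr, "rte_eal_init failed\n");
		return 1;
	}
	printf("EAL initialized\n");
	rte_eal_cleanup();
	return 0;
}
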
00:33:47.443 [2024-04-26 23:36:36.442279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.443 [2024-04-26 23:36:36.442797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.443 [2024-04-26 23:36:36.443136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.443 [2024-04-26 23:36:36.443147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.443 [2024-04-26 23:36:36.443155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.443 [2024-04-26 23:36:36.443370] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.443 [2024-04-26 23:36:36.443585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.443 [2024-04-26 23:36:36.443594] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.443 [2024-04-26 23:36:36.443602] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.443 [2024-04-26 23:36:36.447075] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.443 [2024-04-26 23:36:36.456076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.443 [2024-04-26 23:36:36.456618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.443 [2024-04-26 23:36:36.456991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.443 [2024-04-26 23:36:36.457003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.443 [2024-04-26 23:36:36.457010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.443 [2024-04-26 23:36:36.457226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.443 [2024-04-26 23:36:36.457446] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.443 [2024-04-26 23:36:36.457455] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.443 [2024-04-26 23:36:36.457462] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.443 [2024-04-26 23:36:36.460930] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.443 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.443 [2024-04-26 23:36:36.469943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.443 [2024-04-26 23:36:36.470627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.443 [2024-04-26 23:36:36.470843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.443 [2024-04-26 23:36:36.470857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.443 [2024-04-26 23:36:36.470867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.443 [2024-04-26 23:36:36.471101] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.443 [2024-04-26 23:36:36.471321] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.443 [2024-04-26 23:36:36.471331] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.443 [2024-04-26 23:36:36.471339] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.443 [2024-04-26 23:36:36.474812] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.443 [2024-04-26 23:36:36.483814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.443 [2024-04-26 23:36:36.484367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.443 [2024-04-26 23:36:36.484735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.443 [2024-04-26 23:36:36.484746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.443 [2024-04-26 23:36:36.484753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.443 [2024-04-26 23:36:36.484975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.443 [2024-04-26 23:36:36.485190] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.443 [2024-04-26 23:36:36.485200] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.443 [2024-04-26 23:36:36.485207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.443 [2024-04-26 23:36:36.488672] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.443 [2024-04-26 23:36:36.497674] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.443 [2024-04-26 23:36:36.498114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.443 [2024-04-26 23:36:36.498411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:47.443 [2024-04-26 23:36:36.498462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.443 [2024-04-26 23:36:36.498472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420
00:33:47.443 [2024-04-26 23:36:36.498480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set
00:33:47.443 [2024-04-26 23:36:36.498694] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor
00:33:47.443 [2024-04-26 23:36:36.498919] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.443 [2024-04-26 23:36:36.498930] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.443 [2024-04-26 23:36:36.498938] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.443 [2024-04-26 23:36:36.502406] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:47.443 [2024-04-26 23:36:36.511425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.443 [2024-04-26 23:36:36.512120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.443 [2024-04-26 23:36:36.512476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.443 [2024-04-26 23:36:36.512490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420
00:33:47.443 [2024-04-26 23:36:36.512499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set
00:33:47.443 [2024-04-26 23:36:36.512737] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor
00:33:47.443 [2024-04-26 23:36:36.512961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.443 [2024-04-26 23:36:36.512972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.443 [2024-04-26 23:36:36.512980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.443 [2024-04-26 23:36:36.516458] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:47.443 [2024-04-26 23:36:36.525264] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.443 [2024-04-26 23:36:36.525718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.443 [2024-04-26 23:36:36.526092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.443 [2024-04-26 23:36:36.526104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420
00:33:47.443 [2024-04-26 23:36:36.526112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set
00:33:47.443 [2024-04-26 23:36:36.526329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor
00:33:47.444 [2024-04-26 23:36:36.526545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.444 [2024-04-26 23:36:36.526554] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.444 [2024-04-26 23:36:36.526561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.444 [2024-04-26 23:36:36.527612] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:47.444 [2024-04-26 23:36:36.527640] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:47.444 [2024-04-26 23:36:36.527647] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:47.444 [2024-04-26 23:36:36.527654] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:47.444 [2024-04-26 23:36:36.527659] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:47.444 [2024-04-26 23:36:36.527758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:33:47.444 [2024-04-26 23:36:36.527899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:47.444 [2024-04-26 23:36:36.527900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:33:47.444 [2024-04-26 23:36:36.530037] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
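
With tracing configured (snapshots via 'spdk_trace -s nvmf -i 0', or by copying /dev/shm/nvmf_trace.0 for offline analysis), the restarted target is fully up: three reactors start, one per bit set in the 0xE core mask (binary 1110, so cores 1, 2 and 3), matching the earlier 'Total cores available: 3' notice. A one-loop C check of that decoding, as a standalone sketch:

/* Sketch: decode the -m 0xE core mask into the reactor cores seen above. */
#include <stdio.h>

int main(void)
{
	unsigned long long mask = 0xE; /* binary 1110, from nvmf_tgt -m 0xE */

	for (int core = 0; core < 64; core++)
		if (mask & (1ULL << core))
			printf("reactor on core %d\n", core); /* prints cores 1, 2, 3 */
	return 0;
}
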
00:33:47.444 [2024-04-26 23:36:36.539149] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.444 [2024-04-26 23:36:36.539728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.540119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.540134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.444 [2024-04-26 23:36:36.540144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.444 [2024-04-26 23:36:36.540381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.444 [2024-04-26 23:36:36.540600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.444 [2024-04-26 23:36:36.540610] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.444 [2024-04-26 23:36:36.540617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.444 [2024-04-26 23:36:36.544096] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.444 [2024-04-26 23:36:36.552904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.444 [2024-04-26 23:36:36.553630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.553901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.553917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.444 [2024-04-26 23:36:36.553927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.444 [2024-04-26 23:36:36.554163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.444 [2024-04-26 23:36:36.554382] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.444 [2024-04-26 23:36:36.554391] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.444 [2024-04-26 23:36:36.554399] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.444 [2024-04-26 23:36:36.557878] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.444 [2024-04-26 23:36:36.566678] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.444 [2024-04-26 23:36:36.567355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.567766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.567779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.444 [2024-04-26 23:36:36.567789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.444 [2024-04-26 23:36:36.568036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.444 [2024-04-26 23:36:36.568256] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.444 [2024-04-26 23:36:36.568265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.444 [2024-04-26 23:36:36.568273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.444 [2024-04-26 23:36:36.571744] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.444 [2024-04-26 23:36:36.580548] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.444 [2024-04-26 23:36:36.581232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.581602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.581616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.444 [2024-04-26 23:36:36.581626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.444 [2024-04-26 23:36:36.581866] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.444 [2024-04-26 23:36:36.582085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.444 [2024-04-26 23:36:36.582095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.444 [2024-04-26 23:36:36.582102] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.444 [2024-04-26 23:36:36.585572] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.444 [2024-04-26 23:36:36.594372] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.444 [2024-04-26 23:36:36.595067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.595477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.595491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.444 [2024-04-26 23:36:36.595500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.444 [2024-04-26 23:36:36.595733] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.444 [2024-04-26 23:36:36.595958] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.444 [2024-04-26 23:36:36.595969] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.444 [2024-04-26 23:36:36.595976] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.444 [2024-04-26 23:36:36.599448] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.444 [2024-04-26 23:36:36.608250] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.444 [2024-04-26 23:36:36.608848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.609091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.609103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.444 [2024-04-26 23:36:36.609111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.444 [2024-04-26 23:36:36.609327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.444 [2024-04-26 23:36:36.609543] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.444 [2024-04-26 23:36:36.609552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.444 [2024-04-26 23:36:36.609559] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.444 [2024-04-26 23:36:36.613045] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.444 [2024-04-26 23:36:36.622049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.444 [2024-04-26 23:36:36.622690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.623077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.623093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.444 [2024-04-26 23:36:36.623107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.444 [2024-04-26 23:36:36.623341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.444 [2024-04-26 23:36:36.623559] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.444 [2024-04-26 23:36:36.623568] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.444 [2024-04-26 23:36:36.623576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.444 [2024-04-26 23:36:36.627056] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.444 [2024-04-26 23:36:36.635910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.444 [2024-04-26 23:36:36.636584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.636834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.636854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.444 [2024-04-26 23:36:36.636865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.444 [2024-04-26 23:36:36.637099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.444 [2024-04-26 23:36:36.637318] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.444 [2024-04-26 23:36:36.637327] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.444 [2024-04-26 23:36:36.637334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.444 [2024-04-26 23:36:36.640807] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.444 [2024-04-26 23:36:36.649813] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.444 [2024-04-26 23:36:36.650352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.444 [2024-04-26 23:36:36.650465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.445 [2024-04-26 23:36:36.650479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.445 [2024-04-26 23:36:36.650489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.445 [2024-04-26 23:36:36.650722] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.445 [2024-04-26 23:36:36.650948] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.445 [2024-04-26 23:36:36.650958] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.445 [2024-04-26 23:36:36.650965] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.445 [2024-04-26 23:36:36.654437] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.445 [2024-04-26 23:36:36.663650] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.445 [2024-04-26 23:36:36.664334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.445 [2024-04-26 23:36:36.664693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.445 [2024-04-26 23:36:36.664706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.445 [2024-04-26 23:36:36.664716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.445 [2024-04-26 23:36:36.664963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.445 [2024-04-26 23:36:36.665183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.445 [2024-04-26 23:36:36.665194] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.445 [2024-04-26 23:36:36.665201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.445 [2024-04-26 23:36:36.668673] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.445 [2024-04-26 23:36:36.677478] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.445 [2024-04-26 23:36:36.677937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.445 [2024-04-26 23:36:36.678265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.445 [2024-04-26 23:36:36.678275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.445 [2024-04-26 23:36:36.678283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.445 [2024-04-26 23:36:36.678498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.445 [2024-04-26 23:36:36.678712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.445 [2024-04-26 23:36:36.678720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.445 [2024-04-26 23:36:36.678727] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.445 [2024-04-26 23:36:36.682199] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.445 [2024-04-26 23:36:36.691201] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.445 [2024-04-26 23:36:36.691874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.445 [2024-04-26 23:36:36.692220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.445 [2024-04-26 23:36:36.692233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.445 [2024-04-26 23:36:36.692242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.445 [2024-04-26 23:36:36.692475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.445 [2024-04-26 23:36:36.692692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.445 [2024-04-26 23:36:36.692701] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.445 [2024-04-26 23:36:36.692708] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.707 [2024-04-26 23:36:36.696189] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.707 [2024-04-26 23:36:36.704992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.707 [2024-04-26 23:36:36.705667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.707 [2024-04-26 23:36:36.706073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.707 [2024-04-26 23:36:36.706088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.707 [2024-04-26 23:36:36.706097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.707 [2024-04-26 23:36:36.706332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.707 [2024-04-26 23:36:36.706554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.707 [2024-04-26 23:36:36.706562] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.707 [2024-04-26 23:36:36.706569] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.707 [2024-04-26 23:36:36.710048] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.707 [2024-04-26 23:36:36.718855] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.707 [2024-04-26 23:36:36.719495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.707 [2024-04-26 23:36:36.719855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.707 [2024-04-26 23:36:36.719869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.707 [2024-04-26 23:36:36.719878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.707 [2024-04-26 23:36:36.720112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.707 [2024-04-26 23:36:36.720329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.707 [2024-04-26 23:36:36.720338] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.707 [2024-04-26 23:36:36.720346] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.707 [2024-04-26 23:36:36.723818] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.707 [2024-04-26 23:36:36.732620] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.707 [2024-04-26 23:36:36.733301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.707 [2024-04-26 23:36:36.733568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.707 [2024-04-26 23:36:36.733581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.707 [2024-04-26 23:36:36.733590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.707 [2024-04-26 23:36:36.733824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.707 [2024-04-26 23:36:36.734049] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.707 [2024-04-26 23:36:36.734057] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.707 [2024-04-26 23:36:36.734065] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.707 [2024-04-26 23:36:36.737536] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.707 [2024-04-26 23:36:36.746337] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.707 [2024-04-26 23:36:36.747049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.707 [2024-04-26 23:36:36.747403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.707 [2024-04-26 23:36:36.747416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.707 [2024-04-26 23:36:36.747425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.707 [2024-04-26 23:36:36.747659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.707 [2024-04-26 23:36:36.747882] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.707 [2024-04-26 23:36:36.747895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.707 [2024-04-26 23:36:36.747903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.707 [2024-04-26 23:36:36.751377] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.707 [2024-04-26 23:36:36.760175] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.707 [2024-04-26 23:36:36.760708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.707 [2024-04-26 23:36:36.761062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.761076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.708 [2024-04-26 23:36:36.761085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.708 [2024-04-26 23:36:36.761319] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.708 [2024-04-26 23:36:36.761537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.708 [2024-04-26 23:36:36.761545] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.708 [2024-04-26 23:36:36.761553] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.708 [2024-04-26 23:36:36.765030] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.708 [2024-04-26 23:36:36.774033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.708 [2024-04-26 23:36:36.774687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.775083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.775098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.708 [2024-04-26 23:36:36.775108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.708 [2024-04-26 23:36:36.775341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.708 [2024-04-26 23:36:36.775559] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.708 [2024-04-26 23:36:36.775568] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.708 [2024-04-26 23:36:36.775575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.708 [2024-04-26 23:36:36.779051] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.708 [2024-04-26 23:36:36.787850] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.708 [2024-04-26 23:36:36.788403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.788746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.788756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.708 [2024-04-26 23:36:36.788763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.708 [2024-04-26 23:36:36.788986] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.708 [2024-04-26 23:36:36.789201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.708 [2024-04-26 23:36:36.789209] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.708 [2024-04-26 23:36:36.789221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.708 [2024-04-26 23:36:36.792690] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.708 [2024-04-26 23:36:36.801691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.708 [2024-04-26 23:36:36.802254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.802611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.802624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.708 [2024-04-26 23:36:36.802633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.708 [2024-04-26 23:36:36.802873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.708 [2024-04-26 23:36:36.803091] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.708 [2024-04-26 23:36:36.803100] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.708 [2024-04-26 23:36:36.803107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.708 [2024-04-26 23:36:36.806577] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.708 [2024-04-26 23:36:36.815591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.708 [2024-04-26 23:36:36.816148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.816507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.816520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.708 [2024-04-26 23:36:36.816529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.708 [2024-04-26 23:36:36.816762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.708 [2024-04-26 23:36:36.816987] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.708 [2024-04-26 23:36:36.817001] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.708 [2024-04-26 23:36:36.817009] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.708 [2024-04-26 23:36:36.820482] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.708 [2024-04-26 23:36:36.829488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.708 [2024-04-26 23:36:36.830211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.830433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.708 [2024-04-26 23:36:36.830446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420 00:33:47.708 [2024-04-26 23:36:36.830456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set 00:33:47.708 [2024-04-26 23:36:36.830690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor 00:33:47.708 [2024-04-26 23:36:36.830916] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.708 [2024-04-26 23:36:36.830926] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.708 [2024-04-26 23:36:36.830933] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.708 [2024-04-26 23:36:36.834412] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.708 [2024-04-26 23:36:36.843252] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.708 [2024-04-26 23:36:36.843927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.708 [2024-04-26 23:36:36.844257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.708 [2024-04-26 23:36:36.844270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe56640 with addr=10.0.0.2, port=4420
00:33:47.708 [2024-04-26 23:36:36.844279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56640 is same with the state(5) to be set
00:33:47.708 [2024-04-26 23:36:36.844512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe56640 (9): Bad file descriptor
00:33:47.708 [2024-04-26 23:36:36.844730] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:47.708 [2024-04-26 23:36:36.844738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:47.708 [2024-04-26 23:36:36.844746] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:47.708 [2024-04-26 23:36:36.848229] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:47.708 [this identical ten-line reset/reconnect cycle repeats roughly every 14 ms against tqpair=0xe56640 (10.0.0.2:4420), each attempt failing with connect() errno = 111, from 23:36:36.857 through 23:36:37.166]
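errno = 111 is ECONNREFUSED: at this point in the test nothing is listening on 10.0.0.2:4420 yet, so every reconnect attempt is refused at the TCP level. A minimal probe sketch, not part of the test suite, that reproduces the same refusal from a shell (host, address, and port taken from the log; assumes bash with /dev/tcp support):

    # Hypothetical probe: the redirection below fails with "Connection refused"
    # (the same ECONNREFUSED the SPDK initiator logs as errno = 111) until a
    # listener exists on 10.0.0.2:4420.
    for i in {1..5}; do
        if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
            echo "attempt $i: port 4420 accepted the connection"
            break
        fi
        echo "attempt $i: connection refused, retrying"
        sleep 0.5
    done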
00:33:47.974 [2024-04-26 23:36:37.175448] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.974 [connect/flush/reinitialization errors as in the cycle above; "Resetting controller failed." at 23:36:37.180]
00:33:47.974 [2024-04-26 23:36:37.189266] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.974 [connect/flush/reinitialization errors as in the cycle above; this attempt's "Resetting controller failed." lands at 23:36:37.194, interleaved with the xtrace output below]
00:33:47.974 23:36:37 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:33:47.974 23:36:37 -- common/autotest_common.sh@850 -- # return 0
00:33:47.974 23:36:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:33:47.974 23:36:37 -- common/autotest_common.sh@716 -- # xtrace_disable
00:33:47.974 23:36:37 -- common/autotest_common.sh@10 -- # set +x
00:33:47.974 [2024-04-26 23:36:37.194153] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:47.974 [2024-04-26 23:36:37.203158] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.974 [connect/flush/reinitialization errors as in the cycle above; "Resetting controller failed." at 23:36:37.208]
00:33:47.974 [2024-04-26 23:36:37.217017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:47.974 [connect/flush/reinitialization errors as in the cycle above; "Resetting controller failed." at 23:36:37.221]
00:33:48.236 [2024-04-26 23:36:37.230743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:48.236 [connect/flush/reinitialization errors as in the cycle above; this attempt's "Resetting controller failed." lands at 23:36:37.235, after the xtrace output below]
00:33:48.236 23:36:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:48.236 23:36:37 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:48.236 23:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:48.236 23:36:37 -- common/autotest_common.sh@10 -- # set +x
00:33:48.236 [2024-04-26 23:36:37.235616] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:48.236 [2024-04-26 23:36:37.236714] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:48.236 23:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:48.236 23:36:37 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:48.236 23:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:48.236 23:36:37 -- common/autotest_common.sh@10 -- # set +x
00:33:48.236 [2024-04-26 23:36:37.244621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:48.236 [connect/flush/reinitialization errors as in the cycle above]
00:33:48.236 [2024-04-26 23:36:37.249610] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:48.236 [2024-04-26 23:36:37.258437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:48.236 [connect/flush/reinitialization errors as in the cycle above; "Resetting controller failed." at 23:36:37.263]
00:33:48.236 [2024-04-26 23:36:37.272167] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:48.236 [connect/flush/reinitialization errors as in the cycle above; this attempt's "Resetting controller failed." lands at 23:36:37.276, after the xtrace output below]
00:33:48.236 Malloc0
00:33:48.236 23:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:48.236 23:36:37 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:48.236 23:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:48.236 23:36:37 -- common/autotest_common.sh@10 -- # set +x
00:33:48.236 [2024-04-26 23:36:37.276882] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:48.236 [2024-04-26 23:36:37.285879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:48.236 [connect/flush/reinitialization errors as in the cycle above; this attempt's "Resetting controller failed." lands at 23:36:37.290, after the xtrace output below]
00:33:48.236 23:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:48.236 23:36:37 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:48.236 23:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:48.236 23:36:37 -- common/autotest_common.sh@10 -- # set +x
00:33:48.236 [2024-04-26 23:36:37.290668] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:48.236 23:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:48.236 23:36:37 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:48.236 23:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:33:48.236 23:36:37 -- common/autotest_common.sh@10 -- # set +x
00:33:48.236 [2024-04-26 23:36:37.299674] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:48.236 [connect/flush/reinitialization errors as in the cycle above, interleaved with the listener RPC; its "Resetting controller failed." opens the next stretch of output]
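Taken together, the rpc_cmd calls traced above provision the target end to end: TCP transport, a RAM-backed bdev, a subsystem, a namespace, and finally the listener the initiator has been retrying against. A standalone sketch of the same sequence, assuming a running SPDK nvmf_tgt and scripts/rpc.py on PATH (all arguments copied from the log):

    # Minimal sketch of the provisioning the test performs via rpc_cmd.
    rpc.py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8 KiB IO unit
    rpc.py bdev_malloc_create 64 512 -b Malloc0                        # 64 MB RAM bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only the last command creates the listening socket, which is why every reconnect attempt before this point in the log ends in ECONNREFUSED.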
00:33:48.236 [2024-04-26 23:36:37.304648] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:48.236 [2024-04-26 23:36:37.306860] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:48.236 23:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:33:48.236 23:36:37 -- host/bdevperf.sh@38 -- # wait 4181270
00:33:48.236 [2024-04-26 23:36:37.313457] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:48.236 [2024-04-26 23:36:37.385221] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:58.244
00:33:58.244 Latency(us)
00:33:58.244 Device Information : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average    min    max
00:33:58.244 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:58.244 Verification LBA range: start 0x0 length 0x4000
00:33:58.244 Nvme1n1 :         15.00    7857.78    30.69    9775.79    0.00    7233.32    778.24    17039.36
00:33:58.244 ===================================================================================================================
00:33:58.244 Total :                    7857.78    30.69    9775.79    0.00    7233.32    778.24    17039.36
23:36:45 -- host/bdevperf.sh@39 -- # sync
23:36:45 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
23:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable
23:36:45 -- common/autotest_common.sh@10 -- # set +x
23:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
23:36:45 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
23:36:45 -- host/bdevperf.sh@44 -- # nvmftestfini
23:36:45 -- nvmf/common.sh@477 -- # nvmfcleanup
23:36:45 -- nvmf/common.sh@117 -- # sync
23:36:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
23:36:45 -- nvmf/common.sh@120 -- # set +e
23:36:45 -- nvmf/common.sh@121 -- # for i in {1..20}
23:36:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
23:36:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
23:36:46 -- nvmf/common.sh@124 -- # set -e
23:36:46 -- nvmf/common.sh@125 -- # return 0
23:36:46 -- nvmf/common.sh@478 -- # '[' -n 4182604 ']'
23:36:46 -- nvmf/common.sh@479 -- # killprocess 4182604
23:36:46 -- common/autotest_common.sh@936 -- # '[' -z 4182604 ']'
23:36:46 -- common/autotest_common.sh@940 -- # kill -0 4182604
23:36:46 -- common/autotest_common.sh@941 -- # uname
23:36:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
23:36:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4182604
23:36:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1
23:36:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
23:36:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4182604'
killing process with pid 4182604
23:36:46 -- common/autotest_common.sh@955 -- # kill 4182604
23:36:46 -- common/autotest_common.sh@960 -- # wait 4182604
23:36:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
23:36:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
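The teardown path above mirrors the setup: remove the subsystem over RPC, unload the kernel initiator modules, then kill the target process. A condensed sketch of the same order, assuming the rpc.py setup from the earlier sketch and root privileges; TGT_PID is a hypothetical variable standing in for the PID the script tracked (4182604 in this run):

    # Hedged teardown sketch, mirroring the nvmftestfini trace above.
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # the log shows this also pulling out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$TGT_PID" && wait "$TGT_PID"   # wait works here because nvmf_tgt is a child of this shell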
23:36:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
23:36:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
23:36:46 -- nvmf/common.sh@278 -- # remove_spdk_ns
23:36:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
23:36:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
23:36:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:59.188 23:36:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:59.188
00:33:59.188 real 0m27.750s
00:33:59.188 user 1m2.695s
00:33:59.188 sys 0m7.079s
00:33:59.188 23:36:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:33:59.188 23:36:48 -- common/autotest_common.sh@10 -- # set +x
00:33:59.188 ************************************
00:33:59.188 END TEST nvmf_bdevperf
00:33:59.188 ************************************
00:33:59.188 23:36:48 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:33:59.188 23:36:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:33:59.188 23:36:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:33:59.188 23:36:48 -- common/autotest_common.sh@10 -- # set +x
00:33:59.451 ************************************
00:33:59.451 START TEST nvmf_target_disconnect
00:33:59.451 ************************************
00:33:59.451 23:36:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:33:59.451 * Looking for test storage...
00:33:59.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:33:59.451 23:36:48 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:59.451 23:36:48 -- nvmf/common.sh@7 -- # uname -s
00:33:59.451 23:36:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:59.451 23:36:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:59.451 23:36:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:59.451 23:36:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:59.451 23:36:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:59.451 23:36:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:59.451 23:36:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:59.451 23:36:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:59.451 23:36:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:59.451 23:36:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:59.451 23:36:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:33:59.451 23:36:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:33:59.451 23:36:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:59.451 23:36:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:59.451 23:36:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:59.451 23:36:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:59.451 23:36:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:59.451 23:36:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:59.451 23:36:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
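The host identity pair is generated once here and reused for every connect: NVME_HOSTNQN comes from nvme-cli, and NVME_HOSTID is its trailing UUID. A small sketch of that derivation, assuming nvme-cli is installed; the ${...##*:} suffix-strip is an illustration of the relationship, not necessarily how common.sh computes it:

    # Hypothetical derivation; the values shown in this run's log follow the
    # same pattern: nqn.2014-08.org.nvmexpress:uuid:<UUID> -> <UUID>.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID after the last colon
    echo "$NVME_HOSTNQN -> $NVME_HOSTID"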
00:33:59.451 23:36:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:59.451 23:36:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated several times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:59.451 23:36:48 -- paths/export.sh@3 -- # PATH=[as above, with /opt/go/1.21.1/bin prepended once more]
00:33:59.451 23:36:48 -- paths/export.sh@4 -- # PATH=[as above, with /opt/protoc/21.7/bin prepended once more]
00:33:59.451 23:36:48 -- paths/export.sh@5 -- # export PATH
00:33:59.451 23:36:48 -- paths/export.sh@6 -- # echo [the resulting PATH value]
00:33:59.451 23:36:48 -- nvmf/common.sh@47 -- # : 0
00:33:59.451 23:36:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:33:59.451 23:36:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:33:59.451 23:36:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:59.451 23:36:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:59.451 23:36:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:59.451 23:36:48 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:33:59.451 23:36:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:33:59.451 23:36:48 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:33:59.451 23:36:48 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:33:59.451 23:36:48 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:33:59.451 23:36:48 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:33:59.451 23:36:48 -- host/target_disconnect.sh@77 -- # nvmftestinit
00:33:59.451 23:36:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:33:59.451 23:36:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:59.451 23:36:48 -- nvmf/common.sh@437 -- # prepare_net_devs
00:33:59.451 23:36:48 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:33:59.451 23:36:48 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:33:59.451 23:36:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:59.451 23:36:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:59.451 23:36:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:59.451 23:36:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:33:59.451 23:36:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:33:59.451 23:36:48 -- nvmf/common.sh@285 -- # xtrace_disable
00:33:59.451 23:36:48 -- common/autotest_common.sh@10 -- # set +x
00:34:07.600 23:36:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:34:07.600 [array declarations for pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx, nvmf/common.sh@291-@298]
00:34:07.600 [device-ID appends populating e810 (0x1592, 0x159b), x722 (0x37d2) and mlx (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013), nvmf/common.sh@301-@318]
00:34:07.600 23:36:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:34:07.600 23:36:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:34:07.600 23:36:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:34:07.600 23:36:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:34:07.600 23:36:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:34:07.600 23:36:55 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:34:07.600 23:36:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:07.600 23:36:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:34:07.600 Found 0000:31:00.0 (0x8086 - 0x159b)
00:34:07.600 [driver checks for the first port: ice is neither unknown nor unbound, 0x159b matches neither 0x1017 nor 0x1019, transport is not rdma; nvmf/common.sh@342-@352]
00:34:07.601 23:36:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:07.601 23:36:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:34:07.601 Found 0000:31:00.1 (0x8086 - 0x159b)
00:34:07.601 [the same driver checks repeated for the second port, nvmf/common.sh@342-@352]
00:34:07.601 23:36:55 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:34:07.601 23:36:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:34:07.601 23:36:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:34:07.601 23:36:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:34:07.601 23:36:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:07.601 23:36:55 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:34:07.601 23:36:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:07.601 23:36:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:34:07.601 Found net devices under 0000:31:00.0: cvl_0_0
00:34:07.601 23:36:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:34:07.601 [the same /sys/bus/pci lookup repeated for 0000:31:00.1, nvmf/common.sh@382-@390]
00:34:07.601 Found net devices under 0000:31:00.1: cvl_0_1
00:34:07.601 23:36:55 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:34:07.601 23:36:55 -- nvmf/common.sh@403 -- # is_hw=yes
00:34:07.601 23:36:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:34:07.601 23:36:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:34:07.601 23:36:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:34:07.601 23:36:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:07.601 23:36:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:07.601 23:36:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:07.601 23:36:55 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:34:07.601 23:36:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:07.601 23:36:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:07.601 23:36:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:34:07.601 23:36:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:07.601 23:36:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:07.601 23:36:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:34:07.601 23:36:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:34:07.601 23:36:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:34:07.601 23:36:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:07.601 23:36:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
dev cvl_0_1 00:34:07.601 23:36:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.601 23:36:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:07.601 23:36:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.601 23:36:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.601 23:36:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.601 23:36:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:07.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:34:07.601 00:34:07.601 --- 10.0.0.2 ping statistics --- 00:34:07.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.601 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:34:07.601 23:36:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:07.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:34:07.601 00:34:07.601 --- 10.0.0.1 ping statistics --- 00:34:07.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.601 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:34:07.601 23:36:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.601 23:36:55 -- nvmf/common.sh@411 -- # return 0 00:34:07.601 23:36:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:34:07.601 23:36:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.601 23:36:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:34:07.601 23:36:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:34:07.601 23:36:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.601 23:36:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:34:07.601 23:36:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:34:07.601 23:36:55 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:07.601 23:36:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:07.601 23:36:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:07.601 23:36:55 -- common/autotest_common.sh@10 -- # set +x 00:34:07.601 ************************************ 00:34:07.601 START TEST nvmf_target_disconnect_tc1 00:34:07.601 ************************************ 00:34:07.601 23:36:55 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:34:07.601 23:36:55 -- host/target_disconnect.sh@32 -- # set +e 00:34:07.601 23:36:55 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:07.601 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.601 [2024-04-26 23:36:56.032137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.601 [2024-04-26 23:36:56.032586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.601 [2024-04-26 23:36:56.032602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedf260 with addr=10.0.0.2, port=4420 00:34:07.601 [2024-04-26 23:36:56.032632] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:07.601 [2024-04-26 23:36:56.032650] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 
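The errno = 111 in the error cascade here is ECONNREFUSED: nvmf_target_disconnect_tc1 deliberately runs the reconnect example against 10.0.0.2:4420 before any target has been started, and the test passes only when spdk_nvme_probe() fails. A minimal sketch of that negative check, reusing the exact reconnect invocation from the log ($rootdir standing in for the abbreviated workspace path is an assumption):

    # tc1-style negative check (sketch): nothing listens on 10.0.0.2:4420 yet,
    # so connect() must fail with ECONNREFUSED (errno 111) and the reconnect
    # example must exit non-zero. $rootdir is an assumed SPDK checkout path.
    set +e
    "$rootdir"/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    rc=$?
    set -e
    if [ "$rc" -eq 0 ]; then
        echo "reconnect unexpectedly succeeded with no target running" >&2
        exit 1
    fi

This mirrors the set +e / run / check / set -e pattern visible at host/target_disconnect.sh@32 through @41 in the trace below.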
00:34:07.601 [2024-04-26 23:36:56.032658] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:07.601 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:07.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:07.601 Initializing NVMe Controllers 00:34:07.601 23:36:56 -- host/target_disconnect.sh@33 -- # trap - ERR 00:34:07.601 23:36:56 -- host/target_disconnect.sh@33 -- # print_backtrace 00:34:07.601 23:36:56 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:34:07.601 23:36:56 -- common/autotest_common.sh@1139 -- # return 0 00:34:07.601 23:36:56 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:34:07.601 23:36:56 -- host/target_disconnect.sh@41 -- # set -e 00:34:07.601 00:34:07.601 real 0m0.102s 00:34:07.601 user 0m0.046s 00:34:07.601 sys 0m0.054s 00:34:07.601 23:36:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:07.601 23:36:56 -- common/autotest_common.sh@10 -- # set +x 00:34:07.601 ************************************ 00:34:07.601 END TEST nvmf_target_disconnect_tc1 00:34:07.601 ************************************ 00:34:07.601 23:36:56 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:07.601 23:36:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:07.601 23:36:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:07.601 23:36:56 -- common/autotest_common.sh@10 -- # set +x 00:34:07.601 ************************************ 00:34:07.601 START TEST nvmf_target_disconnect_tc2 00:34:07.601 ************************************ 00:34:07.601 23:36:56 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:34:07.601 23:36:56 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:34:07.601 23:36:56 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:07.601 23:36:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:34:07.601 23:36:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:07.601 23:36:56 -- common/autotest_common.sh@10 -- # set +x 00:34:07.601 23:36:56 -- nvmf/common.sh@470 -- # nvmfpid=4188720 00:34:07.601 23:36:56 -- nvmf/common.sh@471 -- # waitforlisten 4188720 00:34:07.601 23:36:56 -- common/autotest_common.sh@817 -- # '[' -z 4188720 ']' 00:34:07.601 23:36:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.601 23:36:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:07.601 23:36:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.601 23:36:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:07.601 23:36:56 -- common/autotest_common.sh@10 -- # set +x 00:34:07.601 23:36:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:07.601 [2024-04-26 23:36:56.258898] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:34:07.602 [2024-04-26 23:36:56.258959] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.602 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.602 [2024-04-26 23:36:56.348318] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:07.602 [2024-04-26 23:36:56.394274] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.602 [2024-04-26 23:36:56.394330] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.602 [2024-04-26 23:36:56.394338] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.602 [2024-04-26 23:36:56.394345] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.602 [2024-04-26 23:36:56.394351] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.602 [2024-04-26 23:36:56.395027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:07.602 [2024-04-26 23:36:56.395240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:07.602 [2024-04-26 23:36:56.395431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:07.602 [2024-04-26 23:36:56.395441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:07.864 23:36:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:07.864 23:36:57 -- common/autotest_common.sh@850 -- # return 0 00:34:07.864 23:36:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:34:07.864 23:36:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:07.864 23:36:57 -- common/autotest_common.sh@10 -- # set +x 00:34:07.864 23:36:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.864 23:36:57 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:07.864 23:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:07.864 23:36:57 -- common/autotest_common.sh@10 -- # set +x 00:34:07.864 Malloc0 00:34:07.864 23:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:07.864 23:36:57 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:07.864 23:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:07.864 23:36:57 -- common/autotest_common.sh@10 -- # set +x 00:34:07.864 [2024-04-26 23:36:57.113835] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.125 23:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.125 23:36:57 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:08.125 23:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.125 23:36:57 -- common/autotest_common.sh@10 -- # set +x 00:34:08.125 23:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.125 23:36:57 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:08.125 23:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.125 23:36:57 -- common/autotest_common.sh@10 -- # set +x 00:34:08.125 23:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.125 23:36:57 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.125 23:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.125 23:36:57 -- common/autotest_common.sh@10 -- # set +x 00:34:08.125 [2024-04-26 23:36:57.142149] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.125 23:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.125 23:36:57 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:08.125 23:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.125 23:36:57 -- common/autotest_common.sh@10 -- # set +x 00:34:08.125 23:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.125 23:36:57 -- host/target_disconnect.sh@50 -- # reconnectpid=4188786 00:34:08.125 23:36:57 -- host/target_disconnect.sh@52 -- # sleep 2 00:34:08.126 23:36:57 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:08.126 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.045 23:36:59 -- host/target_disconnect.sh@53 -- # kill -9 4188720 00:34:10.045 23:36:59 -- host/target_disconnect.sh@55 -- # sleep 2 00:34:10.045 Read completed with error (sct=0, sc=8) 00:34:10.045 starting I/O failed 00:34:10.045 Read completed with error (sct=0, sc=8) 00:34:10.045 starting I/O failed 00:34:10.045 Read completed with error (sct=0, sc=8) 00:34:10.045 starting I/O failed 00:34:10.045 Read completed with error (sct=0, sc=8) 00:34:10.045 starting I/O failed 00:34:10.045 Read completed with error (sct=0, sc=8) 00:34:10.045 starting I/O failed 00:34:10.045 Read completed with error (sct=0, sc=8) 00:34:10.045 starting I/O failed 00:34:10.045 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Write completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Write completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Write completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Write completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Write completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 
00:34:10.046 starting I/O failed 00:34:10.046 Write completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 Read completed with error (sct=0, sc=8) 00:34:10.046 starting I/O failed 00:34:10.046 [2024-04-26 23:36:59.168738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.046 [2024-04-26 23:36:59.169270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.169555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.169565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.046 qpair failed and we were unable to recover it. 00:34:10.046 [2024-04-26 23:36:59.169818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.170259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.170287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.046 qpair failed and we were unable to recover it. 00:34:10.046 [2024-04-26 23:36:59.170658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.171137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.171164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.046 qpair failed and we were unable to recover it. 00:34:10.046 [2024-04-26 23:36:59.171502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.172074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.172101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.046 qpair failed and we were unable to recover it. 00:34:10.046 [2024-04-26 23:36:59.172353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.172679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.172686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.046 qpair failed and we were unable to recover it. 00:34:10.046 [2024-04-26 23:36:59.173077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.173431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.046 [2024-04-26 23:36:59.173441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.046 qpair failed and we were unable to recover it. 
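This is the tc2 scenario playing out: the target (PID 4188720) was started inside the cvl_0_0_ns_spdk namespace, a Malloc-backed subsystem was exposed on 10.0.0.2:4420 via RPC, the reconnect example was given two seconds to start I/O, and then the target was hard-killed with kill -9. The in-flight commands complete with an error status, the admin queue reports CQ transport error -6 (ENXIO), and every reconnect attempt afterwards gets ECONNREFUSED because nothing listens on the port any more. A condensed sketch of that flow, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper and $rootdir again an assumed path:

    # Condensed tc2 flow (sketch), mirroring the rpc_cmd calls in the trace above.
    ip netns exec cvl_0_0_ns_spdk "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    "$rootdir"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    "$rootdir"/scripts/rpc.py nvmf_create_transport -t tcp -o
    "$rootdir"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    "$rootdir"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rootdir"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    "$rootdir"/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 "$nvmfpid"   # hard-kill the target while I/O is in flight

One wrinkle the sketch omits: because nvmf_tgt starts in the background, the suite waits for the RPC socket (waitforlisten, visible above) before issuing the first RPC.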
00:34:10.047 [... the same three-line failure sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent reconnect attempt from 23:36:59.173 through 23:36:59.230 ...]
00:34:10.049 [2024-04-26 23:36:59.230791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.049 [2024-04-26 23:36:59.230950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.049 [2024-04-26 23:36:59.230957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.049 qpair failed and we were unable to recover it. 00:34:10.049 [2024-04-26 23:36:59.231164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.049 [2024-04-26 23:36:59.231468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.049 [2024-04-26 23:36:59.231475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.049 qpair failed and we were unable to recover it. 00:34:10.049 [2024-04-26 23:36:59.231671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.049 [2024-04-26 23:36:59.231905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.231912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.232314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.232642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.232648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.233002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.233356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.233362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.233717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.234061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.234067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.234417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.234772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.234778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 
00:34:10.050 [2024-04-26 23:36:59.235130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.235485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.235491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.235811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.236146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.236153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.236566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.236608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.236614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.236822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.237163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.237171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.237497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.237854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.237861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.238180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.238513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.238521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.238930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.239250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.239256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 
00:34:10.050 [2024-04-26 23:36:59.239576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.239792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.239798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.239995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.240287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.240293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.240608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.240934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.240940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.241272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.241596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.241602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.241914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.242217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.242223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.242543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.242902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.242909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.243236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.243571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.243577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 
00:34:10.050 [2024-04-26 23:36:59.243937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.244296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.244303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.244506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.244843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.244850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.245156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.245475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.245482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.245834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.246170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.246177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.246533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.246775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.050 [2024-04-26 23:36:59.246782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.050 qpair failed and we were unable to recover it. 00:34:10.050 [2024-04-26 23:36:59.247172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.247524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.247530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.247880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.248228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.248235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 
00:34:10.051 [2024-04-26 23:36:59.248470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.248805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.248813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.249151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.249473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.249479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.249763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.249960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.249966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.250245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.250584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.250590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.250897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.251249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.251255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.251601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.251923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.251929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.252368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.252580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.252587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 
00:34:10.051 [2024-04-26 23:36:59.252818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.253115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.253122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.253477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.253792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.253798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.254022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.254398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.254404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.254715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.255048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.255056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.255410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.255775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.255781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.255975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.256265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.256271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.256590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.256847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.256854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 
00:34:10.051 [2024-04-26 23:36:59.257165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.257474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.257480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.257780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.257928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.257935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.258239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.258569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.258577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.258812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.259111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.259118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.259436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.259797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.259804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.260010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.260223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.260230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.260561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.260856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.260864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 
00:34:10.051 [2024-04-26 23:36:59.261188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.261506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.261512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.261871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.262175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.262181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.262538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.262866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.262873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.263097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.263397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.263403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.263717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.264053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.264060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.051 [2024-04-26 23:36:59.264264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.264467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.051 [2024-04-26 23:36:59.264473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.051 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.264702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.264996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.265002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 
00:34:10.052 [2024-04-26 23:36:59.265347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.265585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.265591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.265914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.266238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.266245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.266446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.266742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.266748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.267139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.267476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.267482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.267846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.268174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.268180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.268513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.268808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.268815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.269143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.269427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.269434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 
00:34:10.052 [2024-04-26 23:36:59.269783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.270092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.270099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.270299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.270602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.270610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.270975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.271288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.271295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.271636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.271845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.271851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.272023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.272282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.272290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.272485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.272844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.272851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.273167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.273494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.273501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 
00:34:10.052 [2024-04-26 23:36:59.273719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.274003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.274011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.274348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.274585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.274593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.274944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.275284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.275291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.275639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.275982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.275989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.276286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.276634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.276642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.276971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.277311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.277318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.277679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.278006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.278013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 
00:34:10.052 [2024-04-26 23:36:59.278322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.278681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.278688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.279094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.279407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.279414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.279743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.280072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.280079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.280387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.280709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.280716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.281120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.281333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.281339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.281701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.282016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.282023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 00:34:10.052 [2024-04-26 23:36:59.282354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.282707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.052 [2024-04-26 23:36:59.282714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.052 qpair failed and we were unable to recover it. 
00:34:10.053 [2024-04-26 23:36:59.283110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.283404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.283411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.283639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.283960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.283967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.284162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.284466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.284472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.284778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.285085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.285091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.285411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.285771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.285779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.285957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.286275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.286283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.286446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.286768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.286775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 
00:34:10.053 [2024-04-26 23:36:59.287107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.287471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.287478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.287788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.288106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.288114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.288455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.288766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.288773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.289109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.289455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.289462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.289812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.290124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.290133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.290483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.290848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.290857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.291076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.291377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.291390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 
00:34:10.053 [2024-04-26 23:36:59.291742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.292065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.292072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.292397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.292574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.292581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.292999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.293320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.293326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.293509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.293760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.293768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.294062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.294381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.053 [2024-04-26 23:36:59.294388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.053 qpair failed and we were unable to recover it. 00:34:10.053 [2024-04-26 23:36:59.294724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-04-26 23:36:59.295491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-04-26 23:36:59.295508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-04-26 23:36:59.295820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-04-26 23:36:59.296520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-04-26 23:36:59.296535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.323 qpair failed and we were unable to recover it. 
[... the same failure sequence (two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" lines, then "nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every connect retry from 23:36:59.296757 through 23:36:59.384665; only the timestamps differ ...]
00:34:10.329 [2024-04-26 23:36:59.385014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.385320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.385327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.385674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.386009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.386016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.386382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.386708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.386717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.387049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.387385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.387392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.387747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.387940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.387947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.388252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.388597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.388603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.388962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.389339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.389345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 
00:34:10.329 [2024-04-26 23:36:59.389702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.390102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.390110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.390305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.390612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.390619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.390938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.391263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.391270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.391588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.391901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.391908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.392236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.392587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.392594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.392910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.393216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.393224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.393533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.393830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.393842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 
00:34:10.329 [2024-04-26 23:36:59.394169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.394443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.394449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.394800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.395005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.395012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.395410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.395765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.395771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.396089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.396271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.396277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.396600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.396931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.396937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.397274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.397669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.397675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.397941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.398282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.398289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 
00:34:10.329 [2024-04-26 23:36:59.398493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.398853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.398861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.399188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.399528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.399536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.399845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.400198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.400205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.400553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.400880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.400886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.401220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.401399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.401406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.329 qpair failed and we were unable to recover it. 00:34:10.329 [2024-04-26 23:36:59.401617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.329 [2024-04-26 23:36:59.401912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.401919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.402226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.402566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.402573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-04-26 23:36:59.402961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.403391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.403397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.403719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.404040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.404047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.404364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.404653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.404659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.404983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.405253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.405260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.405488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.405794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.405801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.406161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.406518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.406525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.406724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.407042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.407049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-04-26 23:36:59.407378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.407704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.407710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.407931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.408217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.408223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.408567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.408879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.408885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.409201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.409552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.409558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.409910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.410175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.410181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.410542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.410907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.410914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.411233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.411522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.411528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-04-26 23:36:59.411842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.412183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.412189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.412550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.412898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.412905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.413267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.413469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.413477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.413790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.414150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.414158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.414504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.414705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.414712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.414928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.415107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.415113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.415410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.415739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.415746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 
00:34:10.330 [2024-04-26 23:36:59.416016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.416338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.416344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.416651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.416897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.416904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.417235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.417545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.417551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.417903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.418250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.418258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.330 [2024-04-26 23:36:59.418587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.418912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.330 [2024-04-26 23:36:59.418919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.330 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.419242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.419618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.419625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.419973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.420307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.420314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-04-26 23:36:59.420656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.421009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.421016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.421346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.421654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.421661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.422023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.422185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.422192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.422604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.422897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.422904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.423248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.423573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.423579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.423901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.424195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.424202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.424522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.424846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.424854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-04-26 23:36:59.425217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.425471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.425478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.425853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.426152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.426159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.426475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.426841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.426849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.427072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.427267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.427274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.427590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.427803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.427809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.428136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.428451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.428458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.428645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.428967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.428973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-04-26 23:36:59.429300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.429436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.429443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.429629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.429937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.429944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.430282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.430493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.430500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.430806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.431165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.431172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.431481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.431784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.431790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.431994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.432383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.432389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.432746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.433033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.433040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 
00:34:10.331 [2024-04-26 23:36:59.433418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.433737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.433745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.434075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.434411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.434417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.434774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.435099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.435106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.435463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.435819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.435826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.436146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.436513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.436520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.331 qpair failed and we were unable to recover it. 00:34:10.331 [2024-04-26 23:36:59.436877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.331 [2024-04-26 23:36:59.437205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.437211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.437446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.437596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.437603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-04-26 23:36:59.437827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.438121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.438128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.438439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.438634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.438640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.438953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.439326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.439332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.439642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.440051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.440058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.440363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.440702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.440709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.441011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.441231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.441237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.441416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.441772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.441778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-04-26 23:36:59.442107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.442416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.442422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.442858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.443105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.443111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.443330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.443651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.443658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.443962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.444291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.444298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.444636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.444822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.444828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.445168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.445505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.445512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.445744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.446034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.446041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-04-26 23:36:59.446244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.446463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.446469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.446765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.447143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.447150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.447479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.447820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.447827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.448030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.448220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.448226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.448351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.448689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.448695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.449033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.449321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.449328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 00:34:10.332 [2024-04-26 23:36:59.449689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.449993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.332 [2024-04-26 23:36:59.450000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.332 qpair failed and we were unable to recover it. 
00:34:10.332 [2024-04-26 23:36:59.450341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.332 [2024-04-26 23:36:59.450655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.332 [2024-04-26 23:36:59.450661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:10.332 qpair failed and we were unable to recover it.
00:34:10.332-00:34:10.338 [... the same four-line pattern repeats for every subsequent reconnect attempt, with message timestamps running from 23:36:59.450 through 23:36:59.542: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f3ad0000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:34:10.338 [2024-04-26 23:36:59.543217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.543426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.543432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.543738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.544003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.544009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.544341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.544681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.544688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.545470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.545699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.545707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.546510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.546843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.546852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.547210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.547536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.547542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.547903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.548053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.548059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 
00:34:10.338 [2024-04-26 23:36:59.548325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.548506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.548512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.548835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.549146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.549153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.549410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.549687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.549694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.549990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.550311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.550317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.550622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.550929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.550936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.551275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.551469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.338 [2024-04-26 23:36:59.551475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.338 qpair failed and we were unable to recover it. 00:34:10.338 [2024-04-26 23:36:59.551828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.552138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.552144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-04-26 23:36:59.552481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.552832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.552840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.553149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.553441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.553447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.553756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.554055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.554062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.554414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.554731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.554738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.555076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.555395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.555403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.555784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.556127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.556134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.556444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.556804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.556811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-04-26 23:36:59.557197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.557544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.557550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.557844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.558186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.558193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.558510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.558824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.558831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.559061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.559399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.559407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.559759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.560073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.560079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.560259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.560540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.560547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.560883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.561236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.561242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-04-26 23:36:59.561417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.561723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.561729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.562073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.562380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.562386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.562741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.562949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.562956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.563272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.563421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.563427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.563873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.564184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.564191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.564540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.564854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.564862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.565202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.565524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.565530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 
00:34:10.339 [2024-04-26 23:36:59.565791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.566150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.566157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.566460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.566794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.566800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.567107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.567435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.567442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.567797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.568108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.568115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.339 [2024-04-26 23:36:59.568464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.568826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.339 [2024-04-26 23:36:59.568832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.339 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.569141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.569454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.569462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.569820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.570165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.570173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 
00:34:10.611 [2024-04-26 23:36:59.570500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.570768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.570775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.571099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.571324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.571330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.571676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.572032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.572039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.572346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.572673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.572679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.572901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.573196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.573202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.573504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.573821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.573828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.574053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.574332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.574339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 
00:34:10.611 [2024-04-26 23:36:59.574643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.574981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.574987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.575325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.575647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.575654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.575850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.576142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.576149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.576441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.576760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.576766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.577132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.577492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.577498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.577834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.578186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.578193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.578509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.578675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.578683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 
00:34:10.611 [2024-04-26 23:36:59.579000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.579343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.579349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.579704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.580007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.580014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.580354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.580556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.580563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.580880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.581179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.581186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.581512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.581834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.581842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.582196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.582516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.582523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.582840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.583087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.583094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 
00:34:10.611 [2024-04-26 23:36:59.583401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.583690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.583696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.584015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.584365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.584371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.584718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.585078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.585086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.585403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.585761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.585768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.611 qpair failed and we were unable to recover it. 00:34:10.611 [2024-04-26 23:36:59.586087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.611 [2024-04-26 23:36:59.586377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.586384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.586682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.587028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.587035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.587427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.587747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.587754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 
00:34:10.612 [2024-04-26 23:36:59.588133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.588460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.588466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.588720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.589035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.589042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.589345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.589660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.589666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.589979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.590283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.590290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.590617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.590949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.590956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.591284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.591633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.591641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.592012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.592241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.592247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 
00:34:10.612 [2024-04-26 23:36:59.592607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.592921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.592928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.593271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.593419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.593426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.593744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.594048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.594054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.594396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.594688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.594695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.595056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.595349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.595355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.595670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.596001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.596007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.596398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.596550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.596557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 
00:34:10.612 [2024-04-26 23:36:59.596743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.597068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.597075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.597389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.597711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.597719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.598052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.598338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.598345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.598663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.598961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.598968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.599260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.599602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.599608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.599920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.600124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.600130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.600482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.600799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.600806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 
00:34:10.612 [2024-04-26 23:36:59.600995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.601353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.601360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.601694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.601941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.601948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.602094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.602376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.602383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.602717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.603026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.603032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.612 [2024-04-26 23:36:59.603356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.603686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.612 [2024-04-26 23:36:59.603693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.612 qpair failed and we were unable to recover it. 00:34:10.613 [2024-04-26 23:36:59.604053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.613 [2024-04-26 23:36:59.604395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.613 [2024-04-26 23:36:59.604402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.613 qpair failed and we were unable to recover it. 00:34:10.613 [2024-04-26 23:36:59.604750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.613 [2024-04-26 23:36:59.605118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.613 [2024-04-26 23:36:59.605125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.613 qpair failed and we were unable to recover it. 
00:34:10.613 [2024-04-26 23:36:59.605474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.613 [2024-04-26 23:36:59.605754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.613 [2024-04-26 23:36:59.605761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.613 qpair failed and we were unable to recover it.
[... the same error sequence repeats for every reconnect attempt from 2024-04-26 23:36:59.605 through 23:36:59.700 (console time 00:34:10.613-00:34:10.618): two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:34:10.618 [2024-04-26 23:36:59.700639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.700923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.700930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.618 qpair failed and we were unable to recover it.
00:34:10.618 [2024-04-26 23:36:59.701239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.701529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.701536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.618 qpair failed and we were unable to recover it. 00:34:10.618 [2024-04-26 23:36:59.701858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.702160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.702167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.618 qpair failed and we were unable to recover it. 00:34:10.618 [2024-04-26 23:36:59.702492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.702648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.702656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.618 qpair failed and we were unable to recover it. 00:34:10.618 [2024-04-26 23:36:59.702940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.703234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.703240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.618 qpair failed and we were unable to recover it. 00:34:10.618 [2024-04-26 23:36:59.703562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.703886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.703893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.618 qpair failed and we were unable to recover it. 00:34:10.618 [2024-04-26 23:36:59.704217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.704548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.704555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.618 qpair failed and we were unable to recover it. 00:34:10.618 [2024-04-26 23:36:59.704886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.705226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.705232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.618 qpair failed and we were unable to recover it. 
00:34:10.618 [2024-04-26 23:36:59.705429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.705741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.705747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.618 qpair failed and we were unable to recover it. 00:34:10.618 [2024-04-26 23:36:59.706062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.618 [2024-04-26 23:36:59.706457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.706463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.706776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.706981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.706988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.707343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.707702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.707708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.707934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.708149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.708155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.708484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.708817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.708823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.709213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.709567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.709574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 
00:34:10.619 [2024-04-26 23:36:59.709932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.710281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.710287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.710689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.710903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.710909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.711113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.711406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.711412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.711730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.712027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.712034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.712357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.712685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.712692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.713052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.713456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.713462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.713719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.714044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.714050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 
00:34:10.619 [2024-04-26 23:36:59.714446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.714649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.714655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.714956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.715279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.715285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.715438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.715722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.715728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.716039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.716420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.716427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.716781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.717120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.717127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.717455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.717659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.717665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.717987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.718311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.718318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 
00:34:10.619 [2024-04-26 23:36:59.718549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.718898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.718905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.719244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.719553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.719559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.719772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.720100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.720108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.720431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.720748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.720755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.720927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.721229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.721235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.721558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.721874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.721880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.722205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.722522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.722529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 
00:34:10.619 [2024-04-26 23:36:59.722845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.723116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.723123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.619 [2024-04-26 23:36:59.723429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.723773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.619 [2024-04-26 23:36:59.723779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.619 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.724147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.724459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.724466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.724788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.725128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.725134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.725482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.725637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.725645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.725993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.726312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.726318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.726637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.726956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.726962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 
00:34:10.620 [2024-04-26 23:36:59.727289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.727694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.727700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.727877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.728193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.728200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.728523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.728764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.728770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.729036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.729324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.729330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.729643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.729863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.729869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.730297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.730521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.730527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.730864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.731158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.731164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 
00:34:10.620 [2024-04-26 23:36:59.731457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.731775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.731781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.732098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.732410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.732416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.732721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.733042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.733049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.733359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.733570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.733577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.733914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.734221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.734227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.734546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.734861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.734868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.735268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.735435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.735442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 
00:34:10.620 [2024-04-26 23:36:59.735723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.735937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.735944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.736229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.736515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.736521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.736880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.737166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.737173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.737485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.737846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.737852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.620 qpair failed and we were unable to recover it. 00:34:10.620 [2024-04-26 23:36:59.738161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.620 [2024-04-26 23:36:59.738512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.738518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.738758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.739052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.739059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.739338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.739680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.739687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 
00:34:10.621 [2024-04-26 23:36:59.740081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.740388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.740395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.740736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.741065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.741072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.741385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.741638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.741644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.742055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.742224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.742231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.742565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.742775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.742781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.743084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.743400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.743407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.743730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.744077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.744084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 
00:34:10.621 [2024-04-26 23:36:59.744412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.744773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.744780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.745116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.745279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.745285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.745547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.745744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.745750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.746060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.746349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.746356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.746639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.746949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.746956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.747264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.747557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.747563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.747922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.748267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.748274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 
00:34:10.621 [2024-04-26 23:36:59.748627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.748971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.748978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.749329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.749656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.749662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.749981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.750276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.750282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.750601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.750879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.750886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.751211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.751526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.751532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.751857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.752113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.752119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.752428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.752734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.752740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 
00:34:10.621 [2024-04-26 23:36:59.753115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.753433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.753440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.753630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.753946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.753952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.754301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.754649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.754655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.755003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.755316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.755322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.621 [2024-04-26 23:36:59.755632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.755940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.621 [2024-04-26 23:36:59.755947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.621 qpair failed and we were unable to recover it. 00:34:10.622 [2024-04-26 23:36:59.756270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.756597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.756603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.622 qpair failed and we were unable to recover it. 00:34:10.622 [2024-04-26 23:36:59.756936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.757261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.757268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.622 qpair failed and we were unable to recover it. 
00:34:10.622 [2024-04-26 23:36:59.757623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.757962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.757969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.622 qpair failed and we were unable to recover it. 00:34:10.622 [2024-04-26 23:36:59.758296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.758627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.758634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.622 qpair failed and we were unable to recover it. 00:34:10.622 [2024-04-26 23:36:59.758979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.759307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.759313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.622 qpair failed and we were unable to recover it. 00:34:10.622 [2024-04-26 23:36:59.759616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.759929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.759935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.622 qpair failed and we were unable to recover it. 00:34:10.622 [2024-04-26 23:36:59.760166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.760378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.760385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.622 qpair failed and we were unable to recover it. 00:34:10.622 [2024-04-26 23:36:59.760735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.760987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.760994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.622 qpair failed and we were unable to recover it. 00:34:10.622 [2024-04-26 23:36:59.761309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.761662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.622 [2024-04-26 23:36:59.761668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.622 qpair failed and we were unable to recover it. 
00:34:10.622 [2024-04-26 23:36:59.762065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.622 [2024-04-26 23:36:59.762420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.622 [2024-04-26 23:36:59.762426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:10.622 qpair failed and we were unable to recover it.
00:34:10.898 [2024-04-26 23:36:59.858083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.898 [2024-04-26 23:36:59.858413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.898 [2024-04-26 23:36:59.858420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:10.898 qpair failed and we were unable to recover it.
00:34:10.898 [2024-04-26 23:36:59.858732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.859030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.859037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 00:34:10.898 [2024-04-26 23:36:59.859355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.859572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.859578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 00:34:10.898 [2024-04-26 23:36:59.859870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.860197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.860203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 00:34:10.898 [2024-04-26 23:36:59.860516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.860875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.860882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 00:34:10.898 [2024-04-26 23:36:59.861098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.861442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.861448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 00:34:10.898 [2024-04-26 23:36:59.861757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.862093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.862100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 00:34:10.898 [2024-04-26 23:36:59.862415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.862660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.862667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 
00:34:10.898 [2024-04-26 23:36:59.862993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.863182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.863188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 00:34:10.898 [2024-04-26 23:36:59.863496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.863828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.863834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 00:34:10.898 [2024-04-26 23:36:59.864144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.864391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.864397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 00:34:10.898 [2024-04-26 23:36:59.864703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.865029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.865035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.898 qpair failed and we were unable to recover it. 00:34:10.898 [2024-04-26 23:36:59.865247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.898 [2024-04-26 23:36:59.865533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.865540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.865827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.866154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.866161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.866520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.866848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.866856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 
00:34:10.899 [2024-04-26 23:36:59.867202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.867525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.867531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.867845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.868193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.868199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.868554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.868868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.868875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.869210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.869563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.869569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.869897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.870089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.870096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.870408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.870576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.870583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.870773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.870996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.871003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 
00:34:10.899 [2024-04-26 23:36:59.871334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.871637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.871643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.871937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.872161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.872168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.872477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.872806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.872813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.873145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.873480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.873486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.873791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.874092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.874099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.874426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.874738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.874745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.875078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.875387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.875394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 
00:34:10.899 [2024-04-26 23:36:59.875724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.876029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.876036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.876360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.876673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.876679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.876955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.877291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.877298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.877621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.877807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.877813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.878127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.878445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.878451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.878760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.879138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.879144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.879456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.879785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.879793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 
00:34:10.899 [2024-04-26 23:36:59.880111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.880423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.880430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.880755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.881159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.881166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.881490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.881801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.881808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.882103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.882296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.882303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.882671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.882993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.882999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.883285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.883627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.883634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.883942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.884246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.884252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 
00:34:10.899 [2024-04-26 23:36:59.884562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.884813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.884819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.885130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.885276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.885283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.885548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.885841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.885849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.886155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.886446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.886452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.886776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.887064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.887071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.887389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.887670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.887676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.887998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.888341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.888347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 
00:34:10.899 [2024-04-26 23:36:59.888688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.889009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.889016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.889367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.889676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.889683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.890017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.890359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.890365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.890700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.890911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.890918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.891279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.891618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.891624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.891940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.892239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.892247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.892600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.892917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.892924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 
00:34:10.899 [2024-04-26 23:36:59.893206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.893536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.893542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.893860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.894113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.894120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.894449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.894852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.894859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.895162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.895505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.895512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.895845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.896188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.896194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.896542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.896875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.896882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.897217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.897532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.897538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 
00:34:10.899 [2024-04-26 23:36:59.897877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.898179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.898185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.898543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.898878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.899 [2024-04-26 23:36:59.898885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.899 qpair failed and we were unable to recover it. 00:34:10.899 [2024-04-26 23:36:59.899230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.899550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.899556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.899898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.900191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.900197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.900541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.900867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.900874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.901130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.901441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.901447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.901753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.902062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.902068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 
00:34:10.900 [2024-04-26 23:36:59.902462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.902803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.902810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.903159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.903469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.903476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.903806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.904153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.904159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.904419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.904754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.904761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.904993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.905272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.905279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.905630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.905935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.905941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.906286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.906575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.906581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 
00:34:10.900 [2024-04-26 23:36:59.906857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.907184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.907190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.907365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.907652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.907658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.907966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.908188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.908195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.908500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.908853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.908859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.909179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.909533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.909539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.909604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.909893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.909900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.910083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.910387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.910393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 
00:34:10.900 [2024-04-26 23:36:59.910721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.911034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.911041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.911197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.911525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.911531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.911859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.912193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.912199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.912500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.912812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.912819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.913143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.913506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.913513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.913904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.914207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.914213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.914548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.914892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.914898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 
00:34:10.900 [2024-04-26 23:36:59.915189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.915544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.915550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.915772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.916052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.916059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.916411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.916726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.916733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.917043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.917387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.917393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.917760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.918059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.918065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.918485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.918801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.918808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.919140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.919342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.919348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 
00:34:10.900 [2024-04-26 23:36:59.919703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.920027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.920034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.920367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.920706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.920712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.921119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.921474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.921480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.921868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.922197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.922203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.922497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.922805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.922811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.923186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.923414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.923420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 00:34:10.900 [2024-04-26 23:36:59.923630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.923931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.900 [2024-04-26 23:36:59.923938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.900 qpair failed and we were unable to recover it. 
00:34:10.904 [2024-04-26 23:37:00.013864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.014166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.014172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.014564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.014876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.014884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.015108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.015438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.015445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.015761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.016051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.016058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.016465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.016795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.016803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.017136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.017438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.017445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.017687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.017981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.017988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 
00:34:10.904 [2024-04-26 23:37:00.018303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.018665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.018671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.018990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.019332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.019339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.019637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.019982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.019989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.020315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.020645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.020651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.020901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.021048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.021056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.021381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.021728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.021736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.022077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.022398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.022405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 
00:34:10.904 [2024-04-26 23:37:00.022741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.023070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.023078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.023431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.023772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.023779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.024084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.024396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.024402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.024585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.024931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.024938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.025266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.025603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.025609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.025930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.026260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.026267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.026625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.026977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.026983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 
00:34:10.904 [2024-04-26 23:37:00.027334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.027533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.027540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.027772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.028045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.028052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.028252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.028667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.028674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.028996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.029305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.904 [2024-04-26 23:37:00.029311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.904 qpair failed and we were unable to recover it. 00:34:10.904 [2024-04-26 23:37:00.029634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.029929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.029936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.030269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.030510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.030516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.030830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.031056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.031063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 
00:34:10.905 [2024-04-26 23:37:00.031292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.031650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.031656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.031985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.032369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.032376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.032675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.032852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.032859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.033185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.033483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.033489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.033798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.034003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.034010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.034315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.034628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.034634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.034919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.035246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.035253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 
00:34:10.905 [2024-04-26 23:37:00.035566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.035882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.035889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.036230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.036543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.036549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.036858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.037157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.037163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.037469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.037766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.037773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.038095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.038415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.038422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.038618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.038948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.038956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.039234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.039572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.039578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 
00:34:10.905 [2024-04-26 23:37:00.039782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.039983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.039990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.040204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.040347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.040354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.040695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.041008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.041015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.041354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.041618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.041624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.041849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.042207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.042213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.042568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.042881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.042888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.043095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.043390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.043396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 
00:34:10.905 [2024-04-26 23:37:00.043747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.044090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.044097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.044406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.044738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.044746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.045074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.045393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.045400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.045787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.046053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.046061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.046173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.046456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.046463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.046890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.047225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.047232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.047452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.047769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.047775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 
00:34:10.905 [2024-04-26 23:37:00.048100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.048212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.048218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.048551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.048869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.048876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.049189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.049467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.049473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.049810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.050125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.050131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.050454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.050768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.050775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.051005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.051193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.051200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.051424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.051756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.051764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 
00:34:10.905 [2024-04-26 23:37:00.051990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.052211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.052219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.052505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.052871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.052878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.053034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.053351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.053357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.053581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.053927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.053933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.054284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.054494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.054500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.054716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.055041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.055047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.055362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.055655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.055662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 
00:34:10.905 [2024-04-26 23:37:00.056030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.056359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.056366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.056595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.056763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.056770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.056999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.057253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.057260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.057549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.057864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.057870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.058191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.058399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.058405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.058788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.059084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.059091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.059331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.059648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.059655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 
00:34:10.905 [2024-04-26 23:37:00.059970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.060187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.060193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.905 [2024-04-26 23:37:00.060531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.060763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.905 [2024-04-26 23:37:00.060769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.905 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.060917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.061273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.061279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.061604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.061925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.061932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.062243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.062424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.062430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.062757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.063048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.063055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.063376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.063661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.063667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 
00:34:10.906 [2024-04-26 23:37:00.063998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.064218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.064224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.064562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.064878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.064885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.065205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.065342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.065349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.065514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.065753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.065760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.066116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.066482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.066490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.066816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.067123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.067130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.067424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.067630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.067637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 
00:34:10.906 [2024-04-26 23:37:00.067861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.068181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.068188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.068398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.068734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.068741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.068964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.069305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.069311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.069538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.069808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.069816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.070133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.070451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.070457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.070786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.071123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.071130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 00:34:10.906 [2024-04-26 23:37:00.071434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.071618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.906 [2024-04-26 23:37:00.071624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:10.906 qpair failed and we were unable to recover it. 
00:34:10.906 [2024-04-26 23:37:00.071978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.906 [2024-04-26 23:37:00.072266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.906 [2024-04-26 23:37:00.072274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:10.906 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats about 150 more times between 23:37:00.072 and 23:37:00.166; duplicate records omitted ...]
00:34:11.182 [2024-04-26 23:37:00.166435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.182 [2024-04-26 23:37:00.166762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.182 [2024-04-26 23:37:00.166769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.182 qpair failed and we were unable to recover it. 00:34:11.182 [2024-04-26 23:37:00.167147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.182 [2024-04-26 23:37:00.167506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.182 [2024-04-26 23:37:00.167514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.182 qpair failed and we were unable to recover it. 00:34:11.182 [2024-04-26 23:37:00.167845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.182 [2024-04-26 23:37:00.168087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.182 [2024-04-26 23:37:00.168094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.182 qpair failed and we were unable to recover it. 00:34:11.182 [2024-04-26 23:37:00.168381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.182 [2024-04-26 23:37:00.168719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.182 [2024-04-26 23:37:00.168727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.182 qpair failed and we were unable to recover it. 00:34:11.182 [2024-04-26 23:37:00.169120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.169473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.169480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.169787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.170091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.170099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.170252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.170660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.170667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 
00:34:11.183 [2024-04-26 23:37:00.171000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.171347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.171354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.171684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.171898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.171906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.172271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.172628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.172635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.172966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.173282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.173289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.173612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.173792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.173799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.174037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.174334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.174341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.174541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.174898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.174907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 
00:34:11.183 [2024-04-26 23:37:00.175210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.175562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.175569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.175922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.176240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.176247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.176569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.176827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.176835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.176912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.177224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.177231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.177721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.178010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.178018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.178359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.178569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.178576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.178903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.179220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.179227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 
00:34:11.183 [2024-04-26 23:37:00.179568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.179882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.179890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.180281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.180645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.180653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.180818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.181149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.181158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.183 [2024-04-26 23:37:00.181509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.181877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.183 [2024-04-26 23:37:00.181884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.183 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.182199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.182552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.182559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.182758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.183081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.183089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.183444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.183655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.183662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 
00:34:11.184 [2024-04-26 23:37:00.183985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.184325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.184332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.184679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.185029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.185037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.185349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.185657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.185664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.186032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.186242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.186249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.186446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.186762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.186769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.186949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.187261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.187268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.187518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.187703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.187711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 
00:34:11.184 [2024-04-26 23:37:00.187969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.188297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.188305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.188640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.188879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.188886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.189076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.189324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.189331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.189635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.189982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.189989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.190328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.190555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.190562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.190897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.191239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.191247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.191599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.191859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.191866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 
00:34:11.184 [2024-04-26 23:37:00.192042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.192226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.192233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.192547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.192843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.192850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.193274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.193495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.193502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.193840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.194026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.194032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.194388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.194714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.194720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.195042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.195329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.195335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.195679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.195925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.195932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 
00:34:11.184 [2024-04-26 23:37:00.196053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.196256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.196263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.196561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.196834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.196843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.197160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.197462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.197469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.197804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.198159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.198166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.184 qpair failed and we were unable to recover it. 00:34:11.184 [2024-04-26 23:37:00.198489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.198801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.184 [2024-04-26 23:37:00.198808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.199104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.199427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.199434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.199630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.199957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.199964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 
00:34:11.185 [2024-04-26 23:37:00.200298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.200657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.200663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.201027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.201329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.201336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.201628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.201991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.201998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.202337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.202576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.202583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.202911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.203244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.203250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.203583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.203900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.203906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.203951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.204287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.204294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 
00:34:11.185 [2024-04-26 23:37:00.204691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.204881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.204888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.205200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.205500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.205506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.205826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.206144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.206150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.206351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.206735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.206741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.207075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.207392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.207398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.207577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.207895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.207901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.208236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.208573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.208579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 
00:34:11.185 [2024-04-26 23:37:00.208784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.209092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.209098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.209407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.209734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.209740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.209970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.210333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.210339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.210680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.210848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.210855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.211115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.211405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.211412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.211748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.212130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.212137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.212362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.212647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.212654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 
00:34:11.185 [2024-04-26 23:37:00.213022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.213356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.213368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.213732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.213948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.213955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.214149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.214442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.214449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.214786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.215108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.215115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.215417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.215742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.215749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.185 qpair failed and we were unable to recover it. 00:34:11.185 [2024-04-26 23:37:00.215827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.185 [2024-04-26 23:37:00.216077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.216084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.216451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.216798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.216805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 
00:34:11.186 [2024-04-26 23:37:00.217124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.217452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.217460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.217831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.218144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.218151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.218483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.218818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.218825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.219062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.219396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.219403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.219782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.219957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.219964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.220170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.220461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.220467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.220757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.221076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.221082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 
00:34:11.186 [2024-04-26 23:37:00.221481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.221889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.221895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.222110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.222397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.222404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.222736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.223057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.223065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.223416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.223699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.223706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.224036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.224329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.224336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.224535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.224876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.224883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.225215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.225527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.225534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 
00:34:11.186 [2024-04-26 23:37:00.225858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.226212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.226218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.226581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.226916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.226923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.227130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.227431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.227437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.227749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.227944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.227952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.228267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.228496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.228502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.228733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.229032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.229039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 00:34:11.186 [2024-04-26 23:37:00.229367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.229562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.186 [2024-04-26 23:37:00.229569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.186 qpair failed and we were unable to recover it. 
00:34:11.191 [2024-04-26 23:37:00.314462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.191 [2024-04-26 23:37:00.314730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.191 [2024-04-26 23:37:00.314737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.191 qpair failed and we were unable to recover it. 00:34:11.191 [2024-04-26 23:37:00.315112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.191 [2024-04-26 23:37:00.315417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.191 [2024-04-26 23:37:00.315424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.191 qpair failed and we were unable to recover it. 00:34:11.191 [2024-04-26 23:37:00.315717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.191 [2024-04-26 23:37:00.316071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.191 [2024-04-26 23:37:00.316077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.191 qpair failed and we were unable to recover it. 00:34:11.191 [2024-04-26 23:37:00.316396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.191 [2024-04-26 23:37:00.316596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.191 [2024-04-26 23:37:00.316602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.191 qpair failed and we were unable to recover it. 00:34:11.191 [2024-04-26 23:37:00.316910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.191 [2024-04-26 23:37:00.317051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.317057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.317393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.317715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.317721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.318021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.318314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.318320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 
00:34:11.192 [2024-04-26 23:37:00.318639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.318962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.318969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.319305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.319641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.319647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.319981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.320297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.320304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.320613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.320973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.320980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.321280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.321569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.321575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.321916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.322239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.322245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.322570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.322779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.322786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 
00:34:11.192 [2024-04-26 23:37:00.323131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.323457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.323464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.323780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.324011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.324018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.324360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.324596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.324603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.324930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.325146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.325153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.325396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.325710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.325716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.326001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.326327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.326334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.326604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.326955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.326961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 
00:34:11.192 [2024-04-26 23:37:00.327201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.327513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.327519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.327741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.328047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.328053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.328350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.328670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.328677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.329045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.329377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.329383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.329736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.330064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.330071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.330364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.330680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.330687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.331016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.331371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.331377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 
00:34:11.192 [2024-04-26 23:37:00.331693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.332045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.332052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.332228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.332566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.332573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.332890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.333225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.333231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.192 qpair failed and we were unable to recover it. 00:34:11.192 [2024-04-26 23:37:00.333461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.333809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.192 [2024-04-26 23:37:00.333816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.334175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.334413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.334420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.334541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.334829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.334835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.335236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.335585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.335591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 
00:34:11.193 [2024-04-26 23:37:00.335941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.336248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.336255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.336606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.336959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.336966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.337306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.337670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.337678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.338014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.338207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.338214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.338432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.338742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.338748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.339104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.339421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.339427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.339769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.340085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.340091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 
00:34:11.193 [2024-04-26 23:37:00.340412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.340662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.340668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.340933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.341307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.341314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.341666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.341986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.341992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.342302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.342619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.342625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.343012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.343446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.343452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.343802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.344145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.344152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.344469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.344826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.344832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 
00:34:11.193 [2024-04-26 23:37:00.345020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.345366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.345372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.345682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.345885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.345892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.346207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.346503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.346509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.346799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.347219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.347225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.347532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.347856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.347862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.348197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.348519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.348526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.348867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.349170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.349177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 
00:34:11.193 [2024-04-26 23:37:00.349438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.349756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.193 [2024-04-26 23:37:00.349762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.193 qpair failed and we were unable to recover it. 00:34:11.193 [2024-04-26 23:37:00.350132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.350477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.350484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.350682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.351028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.351035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.351422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.351736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.351742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.352074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.352388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.352395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.352704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.353065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.353071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.353414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.353731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.353738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 
00:34:11.194 [2024-04-26 23:37:00.354063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.354393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.354399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.354730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.355029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.355035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.355426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.355637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.355643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.356006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.356321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.356328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.356663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.356963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.356969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.357295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.357497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.357503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.357805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.358008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.358015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 
00:34:11.194 [2024-04-26 23:37:00.358171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.358481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.358488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.358789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.358998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.359004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.359222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.359557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.359563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.359854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.360149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.360155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.360504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.360831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.360839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.361060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.361354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.361361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.361588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.361810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.361818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 
00:34:11.194 [2024-04-26 23:37:00.362127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.362339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.362347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.362669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.363028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.363034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.363380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.363691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.363697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.363993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.364197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.364205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.364416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.364711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.364717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.365017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.365353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.365360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.365697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.365993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.366000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 
00:34:11.194 [2024-04-26 23:37:00.366328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.366635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.366641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.194 [2024-04-26 23:37:00.367020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.367335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.194 [2024-04-26 23:37:00.367341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.194 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.367657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.367944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.367951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.368273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.368593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.368599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.368777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.369094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.369100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.369470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.369705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.369712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.369947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.370304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.370312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 
00:34:11.195 [2024-04-26 23:37:00.370479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.370785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.370792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.371242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.371572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.371579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.371920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.372255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.372261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.372585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.372882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.372889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.373221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.373417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.373423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.373698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.374004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.374010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.374424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.374675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.374682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 
00:34:11.195 [2024-04-26 23:37:00.375012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.375401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.375408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.375734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.376072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.376079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.376360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.376676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.376686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.377022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.377366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.377372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.377595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.377973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.377980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.378309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.378634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.378640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.378992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.379201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.379207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 
00:34:11.195 [2024-04-26 23:37:00.379532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.379735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.379741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.379962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.380270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.380277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.380468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.380834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.380842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.381144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.381456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.381462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.381782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.382159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.382165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.382510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.382665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.382672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.383064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.383396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.383403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 
00:34:11.195 [2024-04-26 23:37:00.383599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.383794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.383801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.384132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.384348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.384355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.195 qpair failed and we were unable to recover it. 00:34:11.195 [2024-04-26 23:37:00.384662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.384952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.195 [2024-04-26 23:37:00.384959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.385170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.385363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.385370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.385682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.386021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.386028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.386367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.386597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.386603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.386775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.387136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.387143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 
00:34:11.196 [2024-04-26 23:37:00.387471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.387726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.387733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.388033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.388351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.388357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.388687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.388998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.389005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.389206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.389498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.389504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.389836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.390177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.390183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.390377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.390727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.390733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.391144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.391472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.391479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 
00:34:11.196 [2024-04-26 23:37:00.391838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.392185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.392191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.392524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.392840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.392846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.393135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.393350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.393356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.393563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.393819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.393825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.394196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.394486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.394492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.394788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.395151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.395157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.395393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.395540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.395547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 
00:34:11.196 [2024-04-26 23:37:00.395838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.396132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.396138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.396526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.396877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.396883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.397184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.397534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.397540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.397744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.397889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.397897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.398119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.398337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.398343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.398713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.398916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.398923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.399272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.399562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.399568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 
00:34:11.196 [2024-04-26 23:37:00.399869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.400165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.400172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.400505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.400664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.400672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.400998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.401339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.401346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.196 qpair failed and we were unable to recover it. 00:34:11.196 [2024-04-26 23:37:00.401675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.196 [2024-04-26 23:37:00.402076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.402082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.402398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.402608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.402614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.402940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.403249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.403255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.403615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.403937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.403943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 
00:34:11.197 [2024-04-26 23:37:00.404174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.404316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.404323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.404643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.404959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.404966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.405271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.405566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.405572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.405775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.406121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.406127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.406443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.406763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.406770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.407090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.407401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.407408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.407746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.408074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.408080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 
00:34:11.197 [2024-04-26 23:37:00.408406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.408738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.408745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.409072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.409394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.409401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.409750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.410133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.410139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.410464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.410785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.410792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.411122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.411434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.411440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.411799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.412088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.412094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.412421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.412769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.412776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 
00:34:11.197 [2024-04-26 23:37:00.413109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.413447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.413453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.413815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.414121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.414128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.414485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.414848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.414856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.415182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.415530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.415536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.415873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.416278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.416284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.416604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.416902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.416909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 00:34:11.197 [2024-04-26 23:37:00.417278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.417594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.417600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.197 qpair failed and we were unable to recover it. 
00:34:11.197 [2024-04-26 23:37:00.417952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.197 [2024-04-26 23:37:00.418282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.418289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.198 qpair failed and we were unable to recover it. 00:34:11.198 [2024-04-26 23:37:00.418623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.418945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.418951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.198 qpair failed and we were unable to recover it. 00:34:11.198 [2024-04-26 23:37:00.419272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.419620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.419626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.198 qpair failed and we were unable to recover it. 00:34:11.198 [2024-04-26 23:37:00.420002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.420294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.420301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.198 qpair failed and we were unable to recover it. 00:34:11.198 [2024-04-26 23:37:00.420647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.420977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.420983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.198 qpair failed and we were unable to recover it. 00:34:11.198 [2024-04-26 23:37:00.421278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.421645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.421651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.198 qpair failed and we were unable to recover it. 00:34:11.198 [2024-04-26 23:37:00.421976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.422238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.422244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.198 qpair failed and we were unable to recover it. 
00:34:11.198 [2024-04-26 23:37:00.422429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.422733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.422739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.198 qpair failed and we were unable to recover it. 00:34:11.198 [2024-04-26 23:37:00.423076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.423376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.423382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.198 qpair failed and we were unable to recover it. 00:34:11.198 [2024-04-26 23:37:00.423655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.423999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.198 [2024-04-26 23:37:00.424006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.198 qpair failed and we were unable to recover it. 00:34:11.198 [2024-04-26 23:37:00.424229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.424579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.424587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.424897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.425148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.425154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.425387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.425676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.425682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.425854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.426167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.426174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 
00:34:11.487 [2024-04-26 23:37:00.426495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.426825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.426831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.427163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.427498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.427504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.427852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.428033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.428040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.428356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.428708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.428715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.428919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.429220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.429227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.429558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.429854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.429861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.430157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.430481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.430488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 
00:34:11.487 [2024-04-26 23:37:00.430816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.431144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.431151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.431494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.431812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.431818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.432137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.432380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.432387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.432718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.432989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.432996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.433336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.433711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.433717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.433944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.434183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.434189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.434385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.434617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.434624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 
00:34:11.487 [2024-04-26 23:37:00.434932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.435218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.435225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.435552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.435853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.435859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.436252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.436579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.436586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.487 qpair failed and we were unable to recover it. 00:34:11.487 [2024-04-26 23:37:00.436926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.487 [2024-04-26 23:37:00.437259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.437265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.437467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.437712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.437719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.438059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.438392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.438398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.438631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.438889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.438896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 
00:34:11.488 [2024-04-26 23:37:00.439198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.439518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.439525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.439847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.440043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.440050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.440382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.440738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.440744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.441078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.441289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.441295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.441580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.441931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.441938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.442262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.442565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.442571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.442890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.443205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.443211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 
00:34:11.488 [2024-04-26 23:37:00.443528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.443734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.443740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.444043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.444246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.444253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.444564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.444737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.444743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.445066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.445362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.445368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.445691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.445906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.445912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.446142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.446370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.446376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.446730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.447062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.447068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 
00:34:11.488 [2024-04-26 23:37:00.447401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.447692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.447698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.448061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.448354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.448361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.448663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.448909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.448915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.449258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.449406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.449413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.449801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.450119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.450126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.450422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.450618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.450624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.450868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.451107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.451113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 
00:34:11.488 [2024-04-26 23:37:00.451441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.451735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.451741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.452064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.452356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.452363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.452591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.452873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.452879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.453188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.453495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.453502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.453823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.454135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.454142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.454453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.454770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.454776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.454972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.455235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.455241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 
00:34:11.488 [2024-04-26 23:37:00.455585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.455908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.455916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.456247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.456579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.456585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.456807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.457140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.457146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.457457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.457801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.457808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.458140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.458458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.458464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.458664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.458938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.458945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 00:34:11.488 [2024-04-26 23:37:00.459268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.459500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.488 [2024-04-26 23:37:00.459506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.488 qpair failed and we were unable to recover it. 
00:34:11.491 [2024-04-26 23:37:00.545166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.545355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.545362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.545665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.546029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.546036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.546343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.546656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.546663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.546990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.547295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.547301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.547511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.547861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.547869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.548206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.548574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.548581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.548790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.548991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.548997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 
00:34:11.491 [2024-04-26 23:37:00.549311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.549646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.549652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.549892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.550204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.550210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.550538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.550849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.550857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.551145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.551463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.551470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.551794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.552158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.552166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.552481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.552691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.552697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.553037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.553384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.553390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 
00:34:11.491 [2024-04-26 23:37:00.553582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.553929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.553938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.554230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.554532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.554540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.554930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.555269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.555276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.555604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.555972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.555979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.556297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.556624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.556630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.556946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.557345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.557352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.557616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.557978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.557985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 
00:34:11.491 [2024-04-26 23:37:00.558321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.558657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.558663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.559017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.559316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.559323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.559515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.559862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.559870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.560058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.560324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.560334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.560710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.560997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.561005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.561332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.561662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.561669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.562010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.562344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.562351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 
00:34:11.491 [2024-04-26 23:37:00.562664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.562876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.562883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.491 qpair failed and we were unable to recover it. 00:34:11.491 [2024-04-26 23:37:00.563117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.491 [2024-04-26 23:37:00.563443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.563449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.563713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.563907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.563914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.564091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.564412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.564425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.564750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.565078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.565085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.565385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.565704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.565710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.566037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.566367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.566375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 
00:34:11.492 [2024-04-26 23:37:00.566716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.567044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.567051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.567363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.567652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.567659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.567982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.568318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.568325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.568682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.569011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.569025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.569306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.569638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.569644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.569969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.570258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.570265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.570482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.570810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.570816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 
00:34:11.492 [2024-04-26 23:37:00.571043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.571373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.571379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.571710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.572023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.572029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.572253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.572546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.572553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.572783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.572960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.572968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.573131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.573436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.573443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.573778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.574100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.574107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.574420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.574608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.574615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 
00:34:11.492 [2024-04-26 23:37:00.574934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.575302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.575309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.575618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.575967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.575974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.576212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.576457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.576464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.576788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.577126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.577132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.577511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.577822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.577829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.578026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.578380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.578388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.578723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.579069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.579076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 
00:34:11.492 [2024-04-26 23:37:00.579424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.579761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.579768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.579996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.580294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.580301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.580533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.580752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.580758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.581059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.581396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.581403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.581756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.582072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.582079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.582411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.582734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.582740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.583089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.583297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.583303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 
00:34:11.492 [2024-04-26 23:37:00.583630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.583981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.583988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.584224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.584488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.584495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.584800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.585022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.585029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.585073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.585407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.585414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.585654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.585955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.585962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.586175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.586503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.586510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.586854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.587157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.587163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 
00:34:11.492 [2024-04-26 23:37:00.587493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.587734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.587740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.588065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.588218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.588225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.588470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.588550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.588556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.588873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.589214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.589221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.589422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.589715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.589722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.590055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.590375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.590383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.590712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.591106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.591113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 
00:34:11.492 [2024-04-26 23:37:00.591471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.591759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.591766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.492 qpair failed and we were unable to recover it. 00:34:11.492 [2024-04-26 23:37:00.592122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.592319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.492 [2024-04-26 23:37:00.592326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.592649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.592968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.592974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.593310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.593624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.593632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.593983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.594321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.594328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.594709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.595004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.595011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.595347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.595677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.595684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 
00:34:11.493 [2024-04-26 23:37:00.596006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.596296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.596302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.596564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.596907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.596913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.597275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.597662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.597668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.597973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.598301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.598308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.598655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.598987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.598994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.599340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.599668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.599675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.600031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.600389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.600395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 
00:34:11.493 [2024-04-26 23:37:00.600695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.601015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.601022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.601203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.601407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.601414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.601740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.602050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.602057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.602385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.602736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.602742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.602949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.603318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.603325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.603720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.604029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.604041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.604373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.604686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.604692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 
00:34:11.493 [2024-04-26 23:37:00.605041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.605364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.605370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.605727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.605935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.605942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.606294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.606657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.606664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.606996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.607322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.607329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.607683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.608013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.608019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.608345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.608660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.608666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 00:34:11.493 [2024-04-26 23:37:00.609019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.609356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.493 [2024-04-26 23:37:00.609363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.493 qpair failed and we were unable to recover it. 
00:34:11.493 [2024-04-26 23:37:00.609551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.493 [2024-04-26 23:37:00.609860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.493 [2024-04-26 23:37:00.609867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:11.493 qpair failed and we were unable to recover it.
[... the four-record connect()/qpair-failure block above repeats for every reconnect attempt from 23:37:00.609860 through 23:37:00.703647, always with errno = 111, tqpair=0x7f3ad0000b90, addr=10.0.0.2, port=4420; only the per-attempt timestamps differ, and the wall-clock prefix advances from 00:34:11.493 to 00:34:11.496 ...]
00:34:11.496 [2024-04-26 23:37:00.704044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.704354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.704361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.704720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.705029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.705035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.705352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.705584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.705591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.705934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.706231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.706238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.706373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.706677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.706685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.706994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.707328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.707335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.707695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.707992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.707999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 
00:34:11.496 [2024-04-26 23:37:00.708305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.708600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.708606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.709004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.709185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.709192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.709493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.709805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.709811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.710150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.710482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.710488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.710812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.711012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.711019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.711241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.711489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.711496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.711796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.711944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.711951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 
00:34:11.496 [2024-04-26 23:37:00.712295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.712590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.712597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.712933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.713251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.713258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.713632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.713968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.713975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.714364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.714567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.714574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.714871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.714995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.715003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.715225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.715590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.715596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.715774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.716035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.716042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 
00:34:11.496 [2024-04-26 23:37:00.716407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.716736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.716743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.717048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.717311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.717317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.717645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.717983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.717990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.718367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.718692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.718699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.719007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.719227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.719233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.719617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.719865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.719872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.720233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.720492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.720498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 
00:34:11.496 [2024-04-26 23:37:00.720897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.721213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.721220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.721399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.721724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.721731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.721998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.722216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.722222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.722542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.722861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.722868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.723214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.723532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.723539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.723931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.724278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.724285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.496 [2024-04-26 23:37:00.724616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.724931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.724938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 
00:34:11.496 [2024-04-26 23:37:00.725271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.725608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.496 [2024-04-26 23:37:00.725615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.496 qpair failed and we were unable to recover it. 00:34:11.497 [2024-04-26 23:37:00.725944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.726267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.726273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.497 qpair failed and we were unable to recover it. 00:34:11.497 [2024-04-26 23:37:00.726590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.726783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.726789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.497 qpair failed and we were unable to recover it. 00:34:11.497 [2024-04-26 23:37:00.727206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.727446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.727452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.497 qpair failed and we were unable to recover it. 00:34:11.497 [2024-04-26 23:37:00.727803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.728100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.728107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.497 qpair failed and we were unable to recover it. 00:34:11.497 [2024-04-26 23:37:00.728294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.728577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.728584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.497 qpair failed and we were unable to recover it. 00:34:11.497 [2024-04-26 23:37:00.728693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.728988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.497 [2024-04-26 23:37:00.728994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.497 qpair failed and we were unable to recover it. 
00:34:11.497 [2024-04-26 23:37:00.729246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.729601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.729608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.729804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.730101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.730108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.730473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.730781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.730794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.731128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.731324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.731330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.731716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.731994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.732000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.732248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.732425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.732432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.732787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.733094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.733102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 
00:34:11.767 [2024-04-26 23:37:00.733517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.733820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.733827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.734254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.734580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.734586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.734933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.735234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.735240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.735489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.735678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.735685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.736013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.736327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.736334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.736533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.736848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.736855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.737197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.737493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.737499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 
00:34:11.767 [2024-04-26 23:37:00.737697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.738032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.738039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.738355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.738755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.738762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.739095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.739296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.739303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.739472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.739737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.739743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.740005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.740368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.740375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.740679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.741017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.741023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.741361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.741678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.741684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 
00:34:11.767 [2024-04-26 23:37:00.741765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.741965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.741972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.742326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.742643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.742649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.742968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.743142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.743149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.743367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.743744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.743751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.743852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.744101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.744112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.744480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.744848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.744856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.745213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.745589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.745595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 
00:34:11.767 [2024-04-26 23:37:00.745929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.746150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.746156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.746481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.746780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.746786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.747178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.747510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.747517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.747809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.748141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.748148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.748551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.748845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.748852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.749205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.749541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.749547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.749912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.750280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.750286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 
00:34:11.767 [2024-04-26 23:37:00.750520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.750777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.750786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.751110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.751437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.751443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.751775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.752143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.752149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.752399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.752691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.752697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.753028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.753242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.753248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.753542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.753876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.753882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.754209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.754390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.754396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 
00:34:11.767 [2024-04-26 23:37:00.754544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.754934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.754941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.755267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.755577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.755584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.755917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.756147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.756154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.756500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.756850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.756858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.757203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.757579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.757585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.757907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.758270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.758276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.758588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.758797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.758803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 
00:34:11.767 [2024-04-26 23:37:00.759186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.759509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.759516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.759835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.760170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.767 [2024-04-26 23:37:00.760177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.767 qpair failed and we were unable to recover it. 00:34:11.767 [2024-04-26 23:37:00.760552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.760756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.760763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 00:34:11.768 [2024-04-26 23:37:00.761150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.761438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.761445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 00:34:11.768 [2024-04-26 23:37:00.761629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.761987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.761994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 00:34:11.768 [2024-04-26 23:37:00.762221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.762560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.762567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 00:34:11.768 [2024-04-26 23:37:00.762897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.763136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.763143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 
00:34:11.768 [2024-04-26 23:37:00.763541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.763901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.763909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 00:34:11.768 [2024-04-26 23:37:00.764280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.764510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.764517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 00:34:11.768 [2024-04-26 23:37:00.764815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.765159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.765166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 00:34:11.768 [2024-04-26 23:37:00.765466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.765677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.765684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 00:34:11.768 [2024-04-26 23:37:00.765901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.766223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.766230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 00:34:11.768 [2024-04-26 23:37:00.766597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.766814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.766820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 00:34:11.768 [2024-04-26 23:37:00.767222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.767421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.768 [2024-04-26 23:37:00.767427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.768 qpair failed and we were unable to recover it. 
00:34:11.770 [2024-04-26 23:37:00.859770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.770 [2024-04-26 23:37:00.859986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.770 [2024-04-26 23:37:00.859992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:11.770 qpair failed and we were unable to recover it.
00:34:11.770 [2024-04-26 23:37:00.860330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.860611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.860618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.770 qpair failed and we were unable to recover it. 00:34:11.770 [2024-04-26 23:37:00.861008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.861359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.861366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.770 qpair failed and we were unable to recover it. 00:34:11.770 [2024-04-26 23:37:00.861705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.862020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.862027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.770 qpair failed and we were unable to recover it. 00:34:11.770 [2024-04-26 23:37:00.862333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.862654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.862660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.770 qpair failed and we were unable to recover it. 00:34:11.770 [2024-04-26 23:37:00.863010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.863217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.863223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.770 qpair failed and we were unable to recover it. 00:34:11.770 [2024-04-26 23:37:00.863525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.863884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.770 [2024-04-26 23:37:00.863890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.864235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.864556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.864562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 
00:34:11.771 [2024-04-26 23:37:00.864876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.865215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.865222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.865544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.865794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.865800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.866120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.866317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.866323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.866613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.866969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.866976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.867282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.867596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.867602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.868013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.868372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.868379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.868596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.868721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.868727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 
00:34:11.771 [2024-04-26 23:37:00.869023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.869359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.869365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.869673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.869995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.870002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.870304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.870644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.870650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.870850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.871145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.871151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.871498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.871799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.871805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.872113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.872437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.872443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.872791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.872961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.872967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 
00:34:11.771 [2024-04-26 23:37:00.873310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.873632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.873639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.873866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.874163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.874170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.874486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.874781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.874787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.875120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.875445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.875452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.875797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.876095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.876102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.876290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.876602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.876609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.876963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.877336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.877342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 
00:34:11.771 [2024-04-26 23:37:00.877656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.877966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.877972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.878325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.878666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.878673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.878911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.879090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.879096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.879408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.879709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.879716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.880034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.880251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.880257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.880558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.880919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.880925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.881236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.881588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.881594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 
00:34:11.771 [2024-04-26 23:37:00.881944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.882300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.882307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.882672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.882880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.882887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.883202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.883550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.883556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.883786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.884138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.884145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.884500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.884828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.884835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.885160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.885476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.885482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.885671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.885993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.885999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 
00:34:11.771 [2024-04-26 23:37:00.886196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.886567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.886573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.886967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.887282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.887289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.887627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.887921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.887928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.888245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.888559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.888566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.888881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.889201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.889208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.889556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.889886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.889893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.890111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.890343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.890350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 
00:34:11.771 [2024-04-26 23:37:00.890705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.891058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.891064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.891298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.891621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.891627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.891940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.892271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.892278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.892516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.892846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.892853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.771 qpair failed and we were unable to recover it. 00:34:11.771 [2024-04-26 23:37:00.893229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.893468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.771 [2024-04-26 23:37:00.893475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.893810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.894020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.894026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.894333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.894688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.894694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 
00:34:11.772 [2024-04-26 23:37:00.895004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.895305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.895311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.895472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.895815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.895821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.896119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.896457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.896463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.896779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.897097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.897103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.897456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.897817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.897824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.898190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.898542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.898549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.898876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.899026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.899033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 
00:34:11.772 [2024-04-26 23:37:00.899356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.899638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.899644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.900003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.900218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.900225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.900616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.900947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.900954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.901283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.901601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.901607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.901914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.902259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.902266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.902619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.902946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.902953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.903319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.903636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.903642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 
00:34:11.772 [2024-04-26 23:37:00.903968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.904272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.904278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.904590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.904826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.904832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.905206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.905518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.905525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.905884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.906198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.906204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.906518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.906695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.906701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.907006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.907201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.907208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.907531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.907894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.907902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 
00:34:11.772 [2024-04-26 23:37:00.908242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.908567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.908574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.908903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.909201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.909209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.909523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.909843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.909850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.910173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.910453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.910460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.910643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.910922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.910928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.911188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.911535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.911541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.911894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.912290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.912297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 
00:34:11.772 [2024-04-26 23:37:00.912615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.912934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.912941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.913262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.913557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.913563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.913945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.914381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.914387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.914703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.915034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.915041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.915366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.915686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.915693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.916038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.916376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.916383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.916567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.916767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.916775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 
00:34:11.772 [2024-04-26 23:37:00.917106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.917393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.917399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.917595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.917834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.917842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.918174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.918479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.918485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.918844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.919173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.919180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.919507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.919821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.919829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.920020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.920351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.920358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 00:34:11.772 [2024-04-26 23:37:00.920644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.921066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.772 [2024-04-26 23:37:00.921073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:11.772 qpair failed and we were unable to recover it. 
00:34:11.772 [2024-04-26 23:37:00.921202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.772 [2024-04-26 23:37:00.921493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.772 [2024-04-26 23:37:00.921499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:11.772 qpair failed and we were unable to recover it.
[... the same four-message sequence repeats for every connection attempt between 23:37:00.921 and 23:37:01.017: each retry logs two posix_sock_create connect() failures with errno = 111, then a sock connection error on tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."; only the timestamps and the elapsed-time prefix (00:34:11.772 through 00:34:12.046) advance ...]
00:34:12.046 [2024-04-26 23:37:01.017006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.046 [2024-04-26 23:37:01.017290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.046 [2024-04-26 23:37:01.017298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.046 qpair failed and we were unable to recover it.
00:34:12.046 [2024-04-26 23:37:01.017638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.046 [2024-04-26 23:37:01.017997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.046 [2024-04-26 23:37:01.018005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.046 qpair failed and we were unable to recover it. 00:34:12.046 [2024-04-26 23:37:01.018355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.046 [2024-04-26 23:37:01.018711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.018719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.019015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.019338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.019346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.019703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.019901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.019909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.020261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.020619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.020626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.020969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.021281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.021288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.021617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.021983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.021990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 
00:34:12.047 [2024-04-26 23:37:01.022323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.022525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.022533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.022846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.023161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.023168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.023523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.023885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.023893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.024227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.024546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.024554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.024867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.025232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.025240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.025547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.025911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.025919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.026243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.026516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.026523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 
00:34:12.047 [2024-04-26 23:37:01.026858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.027193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.027201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.027342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.027654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.027661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.027983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.028294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.028302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.028648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.028980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.028988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.029317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.029627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.029635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.029954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.030142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.030150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.030498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.030843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.030851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 
00:34:12.047 [2024-04-26 23:37:01.031133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.031491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.031499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.031799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.032092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.032100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.032444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.032803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.032810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.033161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.033472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.033479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.033819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.034172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.034180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.034506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.034872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.034881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.035190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.035505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.035514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 
00:34:12.047 [2024-04-26 23:37:01.035871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.036217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.036225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.047 [2024-04-26 23:37:01.036454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.036722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.047 [2024-04-26 23:37:01.036730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.047 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.036958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.037311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.037319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.037665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.037868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.037876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.038170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.038481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.038488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.038845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.039168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.039176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.039509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.039823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.039830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 
00:34:12.048 [2024-04-26 23:37:01.040146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.040462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.040470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.040817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.041170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.041179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.041528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.041667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.041676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.041995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.042177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.042185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.042506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.042781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.042789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.043107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.043463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.043470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.043817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.044162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.044171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 
00:34:12.048 [2024-04-26 23:37:01.044540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.044858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.044866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.045193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.045509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.045516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.045851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.046182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.046189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.046536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.046847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.046856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.047201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.047473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.047482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.047831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.047990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.047999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.048324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.048622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.048629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 
00:34:12.048 [2024-04-26 23:37:01.048980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.049309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.049317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.049640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.050000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.050008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.050412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.050699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.050708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.051013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.051322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.051330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.051634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.051990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.051998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.052320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.052679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.052687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.053031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.053378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.053386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 
00:34:12.048 [2024-04-26 23:37:01.053735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.054407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.054425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.054709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.055039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.055047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.048 qpair failed and we were unable to recover it. 00:34:12.048 [2024-04-26 23:37:01.055374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.055622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.048 [2024-04-26 23:37:01.055630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.055978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.056334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.056341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.056695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.056890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.056898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.057149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.057462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.057470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.057783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.058128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.058136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 
00:34:12.049 [2024-04-26 23:37:01.058356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.058646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.058654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.058953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.059326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.059334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.059651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.060006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.060014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.060384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.060742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.060750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.061096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.061276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.061284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.061624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.061918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.061927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.062133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.062428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.062436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 
00:34:12.049 [2024-04-26 23:37:01.062840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.063144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.063153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.063501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.063816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.063824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.064174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.064533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.064542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.064882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.065223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.065232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.065561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.065846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.065854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.066030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.066313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.066320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.066643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.067002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.067010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 
00:34:12.049 [2024-04-26 23:37:01.067325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.067684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.067693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.068000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.068377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.068386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.068696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.069013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.069022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.069325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.069597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.069605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.069915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.070307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.070315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.070645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.071003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.071011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.071369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.071727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.071736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 
00:34:12.049 [2024-04-26 23:37:01.072071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.072427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.072435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.072785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.073133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.073142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.073484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.073803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.073810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.049 qpair failed and we were unable to recover it. 00:34:12.049 [2024-04-26 23:37:01.074161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.049 [2024-04-26 23:37:01.074473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.074481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 00:34:12.050 [2024-04-26 23:37:01.074715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.074993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.075001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 00:34:12.050 [2024-04-26 23:37:01.075319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.075674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.075682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 00:34:12.050 [2024-04-26 23:37:01.076084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.076397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.076405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 
00:34:12.050 [2024-04-26 23:37:01.076716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.077038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.077045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 00:34:12.050 [2024-04-26 23:37:01.077347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.077701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.077709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 00:34:12.050 [2024-04-26 23:37:01.077944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.078216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.078224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 00:34:12.050 [2024-04-26 23:37:01.078428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.078707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.078715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 00:34:12.050 [2024-04-26 23:37:01.079043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.079402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.079410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 00:34:12.050 [2024-04-26 23:37:01.079763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.080082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.080090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 00:34:12.050 [2024-04-26 23:37:01.080338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.080612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.050 [2024-04-26 23:37:01.080620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.050 qpair failed and we were unable to recover it. 
00:34:12.050 [2024-04-26 23:37:01.080971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.050 [2024-04-26 23:37:01.081260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.050 [2024-04-26 23:37:01.081267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.050 qpair failed and we were unable to recover it.
00:34:12.050 [the same sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f3ad0000b90 at 10.0.0.2 port 4420, and "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent reconnect attempt from 23:37:01.081614 onward; only the timestamps differ, so the duplicate records are elided here]
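On Linux, errno 111 is ECONNREFUSED: the TCP connection attempt to 10.0.0.2 port 4420 is actively rejected because nothing is listening on that port while the target is down, which is also why the retries above land only a few hundred microseconds apart (a refused connect() returns immediately instead of timing out). As a minimal sketch, not SPDK source, the C program below reproduces the failure mode that posix_sock_create() is reporting; the address and port are taken from the log, everything else is illustrative assumption.

/* Minimal illustration of the connect() call failing with ECONNREFUSED.
 * 10.0.0.2:4420 comes from the log records above; this is not the SPDK
 * posix_sock_create() implementation, just the same syscall pattern. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener bound to the port, the peer answers with RST
         * and connect() fails immediately with errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}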
00:34:12.055 [2024-04-26 23:37:01.161624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.162290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.162306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.162484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.162803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.162811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.163129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.163475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.163483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.163804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 4188720 Killed "${NVMF_APP[@]}" "$@" 00:34:12.055 [2024-04-26 23:37:01.164124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.164132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.164480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 23:37:01 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:34:12.055 [2024-04-26 23:37:01.164844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.164853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 23:37:01 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:12.055 [2024-04-26 23:37:01.165162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 23:37:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:34:12.055 [2024-04-26 23:37:01.165472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.165480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 
00:34:12.055 23:37:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:12.055 23:37:01 -- common/autotest_common.sh@10 -- # set +x 00:34:12.055 [2024-04-26 23:37:01.165807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.166158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.166166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.166516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.166830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.166842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.167134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.167486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.167495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.167823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.168159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.168168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.168508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.168824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.168832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.169028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.169312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.169320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.169644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.169991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.170002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 
00:34:12.055 [2024-04-26 23:37:01.170357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.170599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.170607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.170782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.171068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.171077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.171314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.171606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.171615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.171936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.172294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.172303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 [2024-04-26 23:37:01.172656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 23:37:01 -- nvmf/common.sh@470 -- # nvmfpid=4189640 00:34:12.055 [2024-04-26 23:37:01.172977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.172987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 23:37:01 -- nvmf/common.sh@471 -- # waitforlisten 4189640 00:34:12.055 23:37:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:12.055 [2024-04-26 23:37:01.173317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 23:37:01 -- common/autotest_common.sh@817 -- # '[' -z 4189640 ']' 00:34:12.055 [2024-04-26 23:37:01.173629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.173639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 23:37:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.055 qpair failed and we were unable to recover it. 
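Reassembling the trace lines interleaved above (nvmf/common.sh@469-471): nvmfappstart launches the new target inside the cvl_0_0_ns_spdk network namespace with instance id 0, all tracepoint groups enabled (-e 0xFFFF), and reactors pinned to cores 4-7 (-m 0xF0), captures the PID as nvmfpid=4189640, and hands it to waitforlisten. The same sequence, reconstructed from those trace lines (the backgrounding with & is assumed; the trace only shows the command and the captured PID):

    # Relaunch sequence as traced in nvmf/common.sh@469-471 above.
    # -i 0: shm instance id, -e 0xFFFF: enable all tracepoint groups,
    # -m 0xF0: core mask, reactors on cores 4-7 only.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock is served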
00:34:12.055 23:37:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:12.055 [2024-04-26 23:37:01.173909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 23:37:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:12.055 [2024-04-26 23:37:01.174295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.055 [2024-04-26 23:37:01.174304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.055 qpair failed and we were unable to recover it. 00:34:12.055 23:37:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:12.055 23:37:01 -- common/autotest_common.sh@10 -- # set +x 00:34:12.056 [2024-04-26 23:37:01.174545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.174862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.174873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.175111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.175385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.175393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.175489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.175842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.175851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.176155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.176465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.176474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.176659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.176886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.176895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 
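The locals traced above (rpc_addr=/var/tmp/spdk.sock, max_retries=100) imply that waitforlisten is a bounded poll: keep checking that the target PID is alive and that its RPC UNIX socket has appeared, up to 100 attempts. A hedged stand-in with that shape (an approximation for illustration, not SPDK's actual helper):

    # Sketch of the wait-for-listen pattern implied by the locals above.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # RPC UNIX socket is up
            sleep 0.5
        done
        return 1                                     # gave up after max_retries
    }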
00:34:12.056 [2024-04-26 23:37:01.177203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.177440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.177450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.177773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.178085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.178094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.178404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.178674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.178683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.178931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.179110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.179119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.179442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.179762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.179771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.180106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.180413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.180422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.180652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.180940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.180950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-04-26 23:37:01.181232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.181449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.181458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.181657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.181863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.181872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.182157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.182513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.182522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.182886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.183120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.183127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.183332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.183538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.183546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.183789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.184089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.184097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.184386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.184671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.184680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-04-26 23:37:01.184993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.185344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.185352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.185612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.185931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.185939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.186241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.186591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.186599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.186931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.187242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.187250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.187566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.187845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.187853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.188143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.188465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.188473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.188825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.189118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.189126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 
00:34:12.056 [2024-04-26 23:37:01.189450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.189594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.189603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.189941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.190283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.190290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.056 qpair failed and we were unable to recover it. 00:34:12.056 [2024-04-26 23:37:01.190654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.056 [2024-04-26 23:37:01.190973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.190980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.191291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.191608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.191616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.191970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.192296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.192304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.192656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.192940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.192948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.193237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.193388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.193396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-04-26 23:37:01.193751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.194078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.194087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.194437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.194629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.194636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.194965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.195213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.195221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.195594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.195881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.195889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.196116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.196428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.196437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.196756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.196923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.196932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.197289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.197593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.197602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-04-26 23:37:01.197902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.198224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.198232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.198547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.198869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.198877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.199110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.199458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.199466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.199663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.199997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.200005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.200329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.200612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.200620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.200832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.201145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.201153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.201437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.201659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.201667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-04-26 23:37:01.201922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.202085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.202093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.202374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.202648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.202656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.203060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.203398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.203406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.203734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.204081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.204089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.204438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.204682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.204690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.205083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.205416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.205424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.205776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.206090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.206098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 
00:34:12.057 [2024-04-26 23:37:01.206422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.206683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.206692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.207037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.207361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.207370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.057 qpair failed and we were unable to recover it. 00:34:12.057 [2024-04-26 23:37:01.207727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.057 [2024-04-26 23:37:01.208031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.208039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.208381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.208534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.208542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.208902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.209106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.209114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.209329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.209724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.209733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.210080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.210290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.210298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 
00:34:12.058 [2024-04-26 23:37:01.210504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.210706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.210713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.211046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.211328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.211336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.211682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.211997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.212005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.212236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.212570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.212579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.212761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.213064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.213072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.213395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.213674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.213682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.213900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.214056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.214064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 
00:34:12.058 [2024-04-26 23:37:01.214414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.214692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.214701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.215002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.215211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.215219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.215555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.215885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.215893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.216265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.216651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.216659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.216984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.217203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.217210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.217404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.217720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.217728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.217799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.218121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.218129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 
00:34:12.058 [2024-04-26 23:37:01.218452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.218665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.218673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.219001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.219356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.219364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.219680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.219879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.219888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.220070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.220411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.220420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.220733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.221075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.221083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.221474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.221799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.221808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.222122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.222452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.222460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 
00:34:12.058 [2024-04-26 23:37:01.222858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.223163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.223172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.223417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.223583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.223591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.058 [2024-04-26 23:37:01.223905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.224097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.058 [2024-04-26 23:37:01.224104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.058 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.224291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.224468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.224477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.224777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.225139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.225148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.225513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.225722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.225730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.226069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.226368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.226376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 
00:34:12.059 [2024-04-26 23:37:01.226693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.226970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.226978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.227322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.227639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.227646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.227977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.228157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.228165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.228489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.228680] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:34:12.059 [2024-04-26 23:37:01.228724] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.059 [2024-04-26 23:37:01.228808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.228815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.229010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.229371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.229378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.229703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.230028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.230037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 
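Amid the retry noise, the banner above marks the replacement target actually initializing: SPDK v24.05-pre at git sha1 8571999d8 over DPDK 23.11.0, with the EAL argument list assembled by nvmf_tgt. Those are standard DPDK EAL options; annotated here per their documented semantics (nothing test-specific):

    # EAL parameters from the banner above, annotated:
    #   -c 0xF0                          core mask: run on cores 4-7
    #   --no-telemetry                   do not start DPDK's telemetry socket
    #   --log-level=lib.eal:6, ...       per-component log verbosity
    #   --base-virtaddr=0x200000000000   base address hint for memory mappings
    #   --match-allocations              return hugepages to the system exactly
    #                                    as they were originally allocated
    #   --file-prefix=spdk0              namespace runtime files so multiple
    #                                    DPDK/SPDK instances can coexist
    #   --proc-type=auto                 auto-detect primary vs. secondary process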
00:34:12.059 [2024-04-26 23:37:01.230242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.230408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.230416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.230746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.231073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.231082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.231401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.231664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.231672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.231861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.232187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.232195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.232390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.232704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.232712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.233075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.233283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.233291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.233594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.233894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.233903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 
00:34:12.059 [2024-04-26 23:37:01.234191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.234511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.234519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.234828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.235164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.235173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.235530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.235739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.235747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.235935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.236047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.236056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.236407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.236710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.236719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.237020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.237336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.237345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 00:34:12.059 [2024-04-26 23:37:01.237704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.237911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.059 [2024-04-26 23:37:01.237920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.059 qpair failed and we were unable to recover it. 
00:34:12.061 [2024-04-26 23:37:01.259499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.259647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.259655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.061 qpair failed and we were unable to recover it.
00:34:12.061 [2024-04-26 23:37:01.259855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.260186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.260194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.061 qpair failed and we were unable to recover it.
00:34:12.061 [2024-04-26 23:37:01.260591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.260908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.260915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.061 qpair failed and we were unable to recover it.
00:34:12.061 [2024-04-26 23:37:01.261235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 EAL: No free 2048 kB hugepages reported on node 1
00:34:12.061 [2024-04-26 23:37:01.261547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.261554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.061 qpair failed and we were unable to recover it.
00:34:12.061 [2024-04-26 23:37:01.261908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.262223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.262231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.061 qpair failed and we were unable to recover it.
00:34:12.061 [2024-04-26 23:37:01.262541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.262856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.262864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.061 qpair failed and we were unable to recover it.
00:34:12.061 [2024-04-26 23:37:01.263190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.263395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.061 [2024-04-26 23:37:01.263404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.061 qpair failed and we were unable to recover it.
00:34:12.336 [2024-04-26 23:37:01.311375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.311688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.311696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.336 qpair failed and we were unable to recover it.
00:34:12.336 [2024-04-26 23:37:01.312449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.312775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.312784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.336 qpair failed and we were unable to recover it.
00:34:12.336 [2024-04-26 23:37:01.312974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.313285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.313293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.336 qpair failed and we were unable to recover it.
00:34:12.336 [2024-04-26 23:37:01.313594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.313920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.313928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.336 qpair failed and we were unable to recover it.
00:34:12.336 [2024-04-26 23:37:01.314252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.314447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.314455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.336 qpair failed and we were unable to recover it.
00:34:12.336 [2024-04-26 23:37:01.314646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.314990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.314999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.336 qpair failed and we were unable to recover it.
00:34:12.336 [2024-04-26 23:37:01.315333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.315545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:12.336 [2024-04-26 23:37:01.315611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.336 [2024-04-26 23:37:01.315618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.336 qpair failed and we were unable to recover it.
00:34:12.337 [2024-04-26 23:37:01.329135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.329502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.329510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.329828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.330196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.330204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.330539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.330901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.330910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.331237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.331558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.331566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.331770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.332067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.332076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.332363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.332716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.332724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.333061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.333340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.333349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 
00:34:12.337 [2024-04-26 23:37:01.333697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.334032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.334040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.334400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.334616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.334624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.334933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.335255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.335264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.335605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.335921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.335929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.336277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.336635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.336643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.336825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.337194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.337202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.337549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.337909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.337917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 
00:34:12.337 [2024-04-26 23:37:01.338224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.338586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.338595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.338952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.339174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.339181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.337 qpair failed and we were unable to recover it. 00:34:12.337 [2024-04-26 23:37:01.339396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.339709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.337 [2024-04-26 23:37:01.339718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 00:34:12.338 [2024-04-26 23:37:01.340043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.340377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.340386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 00:34:12.338 [2024-04-26 23:37:01.340753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.340973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.340981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 00:34:12.338 [2024-04-26 23:37:01.341214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.341498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.341507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 00:34:12.338 [2024-04-26 23:37:01.341829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.342161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.342169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 
00:34:12.338 [2024-04-26 23:37:01.342353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.342641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.342649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 00:34:12.338 [2024-04-26 23:37:01.342810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.343022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.343031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 00:34:12.338 [2024-04-26 23:37:01.343342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.343666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.343675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 00:34:12.338 [2024-04-26 23:37:01.343993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.344331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.344339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 00:34:12.338 [2024-04-26 23:37:01.344700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.345028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.345036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 00:34:12.338 [2024-04-26 23:37:01.345277] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.338 [2024-04-26 23:37:01.345307] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.338 [2024-04-26 23:37:01.345315] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.338 [2024-04-26 23:37:01.345324] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.338 [2024-04-26 23:37:01.345331] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.338 [2024-04-26 23:37:01.345364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.345628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.338 [2024-04-26 23:37:01.345635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.338 qpair failed and we were unable to recover it. 
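errno = 111 on Linux is ECONNREFUSED: the TCP connection to 10.0.0.2:4420 is actively refused, typically because no listener has bound that port yet, and posix_sock_create surfaces the raw errno. A minimal standalone sketch of the same failure mode outside SPDK -- the address and port mirror the log but are placeholders here, not the test configuration:

/* connect_probe.c - reproduce the "connect() failed, errno = 111" record
 * seen above by connecting to a TCP port with no listener. Address and
 * port are placeholders taken from the log, not from the test setup. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no NVMe/TCP target listening, this prints errno = 111
         * (ECONNREFUSED), matching the posix_sock_create errors above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}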
00:34:12.338 [2024-04-26 23:37:01.345495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:12.338 [2024-04-26 23:37:01.345618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:12.338 [2024-04-26 23:37:01.345743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:34:12.338 [2024-04-26 23:37:01.345744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
[... connect() retries and qpair failures continue, 23:37:01.345 through 23:37:01.349 ...]
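The reactor notices show an SPDK application still bringing its cores online at 23:37:01.345, while the connect attempts fail both before and after that point, i.e. nothing is accepting on 10.0.0.2:4420 during this whole window. A common client-side way to ride out such a window is to retry only on ECONNREFUSED with a short delay until the listener appears or a budget runs out; the sketch below is illustrative only (placeholder address, port, and retry budget -- not SPDK's own reconnect policy):

/* retry_connect.c - illustrative retry loop for a listener that is
 * expected to appear later: retry only on ECONNREFUSED, with a short
 * delay and a bounded number of attempts. All parameters are placeholders. */
#include <stdio.h>
#include <stdint.h>
#include <errno.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &sa.sin_addr);

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
            return fd;                     /* listener is up: hand back the fd */
        close(fd);
        if (errno != ECONNREFUSED)         /* only retry "no listener yet" */
            return -1;
        struct timespec delay = { 0, 100 * 1000 * 1000 };  /* 100 ms backoff */
        nanosleep(&delay, NULL);
    }
    errno = ETIMEDOUT;                     /* budget exhausted, listener absent */
    return -1;
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 50);
    if (fd < 0) {
        perror("connect_with_retry");
        return 1;
    }
    printf("connected\n");
    close(fd);
    return 0;
}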
[... the posix_sock_create / nvme_tcp_qpair_connect_sock failure pattern repeats unchanged for every remaining connect attempt, 23:37:01.349 through 23:37:01.403, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:34:12.342 [2024-04-26 23:37:01.403918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.404093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.404100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.404307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.404602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.404610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.404953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.405246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.405253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.405587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.405790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.405798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.406134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.406388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.406396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.406755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.407066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.407075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.407413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.407732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.407739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 
00:34:12.342 [2024-04-26 23:37:01.408089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.408452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.408460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.408795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.409002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.409010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.409349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.409713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.409722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.409914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.410100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.410109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.410160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.410487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.410495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.410692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.411002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.411010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.411343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.411713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.411720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 
00:34:12.342 [2024-04-26 23:37:01.412077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.412398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.412405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.412754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.412958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.412966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.413142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.413481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.413489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.413816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.414028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.414036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.414376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.414746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.414754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.415177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.415542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.415551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.415745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.416058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.416067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 
00:34:12.342 [2024-04-26 23:37:01.416473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.416819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.416827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.417159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.417207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.417213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.417542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.417874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.417882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.418181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.418497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.418504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.418836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.419157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.419164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.342 qpair failed and we were unable to recover it. 00:34:12.342 [2024-04-26 23:37:01.419493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.419696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.342 [2024-04-26 23:37:01.419703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.419920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.420116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.420124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 
00:34:12.343 [2024-04-26 23:37:01.420447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.420648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.420656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.420852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.421153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.421162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.421506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.421795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.421803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.422161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.422475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.422483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.422828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.423199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.423206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.423542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.423749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.423756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.423986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.424389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.424397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 
00:34:12.343 [2024-04-26 23:37:01.424750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.425076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.425085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.425289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.425498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.425506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.425843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.426205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.426213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.426586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.426901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.426909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.427244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.427439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.427446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.427757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.427975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.427983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.428325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.428695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.428702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 
00:34:12.343 [2024-04-26 23:37:01.429040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.429121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.429129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.429421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.429664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.429672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.429957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.430282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.430290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.430606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.430809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.430817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.431010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.431325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.431333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.431520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.431871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.431879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.432099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.432444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.432452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 
00:34:12.343 [2024-04-26 23:37:01.432781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.433110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.433118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.433460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.433673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.433681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.433866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.434052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.434060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.434355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.434563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.434571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.434800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.434947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.434955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.343 qpair failed and we were unable to recover it. 00:34:12.343 [2024-04-26 23:37:01.435141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.343 [2024-04-26 23:37:01.435476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.435484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.435817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.436025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.436033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 
00:34:12.344 [2024-04-26 23:37:01.436401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.436757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.436764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.437013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.437362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.437370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.437705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.438002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.438011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.438358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.438562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.438570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.438985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.439335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.439343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.439502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.439762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.439770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.439980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.440320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.440328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 
00:34:12.344 [2024-04-26 23:37:01.440698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.441060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.441068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.441441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.441760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.441769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.442113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.442446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.442454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.442785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.442995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.443004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.443335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.443657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.443666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.444005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.444189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.444197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.444381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.444685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.444693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 
00:34:12.344 [2024-04-26 23:37:01.444995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.445343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.445351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.445695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.446061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.446069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.446369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.446685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.446692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.447021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.447229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.447238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.447598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.447939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.447947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.448278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.448489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.448496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.448811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.448857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.448865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 
00:34:12.344 [2024-04-26 23:37:01.449242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.449536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.344 [2024-04-26 23:37:01.449543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.344 qpair failed and we were unable to recover it. 00:34:12.344 [2024-04-26 23:37:01.449877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.450200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.450207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.450522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.450564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.450571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.450908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.451084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.451091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.451415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.451609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.451616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.451853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.452154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.452162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.452496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.452787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.452795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 
00:34:12.345 [2024-04-26 23:37:01.452960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.453290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.453299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.453645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.453843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.453852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.454028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.454196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.454203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.454496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.454831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.454842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.455167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.455529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.455537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.455733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.455948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.455956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.456244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.456429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.456436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 
00:34:12.345 [2024-04-26 23:37:01.456792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.457139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.457147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.457488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.457671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.457678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.458040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.458225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.458234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.458571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.458850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.458858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.459215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.459394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.459401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.459463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.459843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.459852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 00:34:12.345 [2024-04-26 23:37:01.460076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.460433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.345 [2024-04-26 23:37:01.460440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.345 qpair failed and we were unable to recover it. 
00:34:12.345 [2024-04-26 23:37:01.460773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.345 [2024-04-26 23:37:01.461130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.345 [2024-04-26 23:37:01.461139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.345 qpair failed and we were unable to recover it.
00:34:12.350 [... the four-line failure sequence above (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats identically for every reconnect attempt from 23:37:01.460 through 23:37:01.549; only the timestamps differ between repetitions ...]
00:34:12.351 [2024-04-26 23:37:01.550117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.550434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.550442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.550795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.551151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.551158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.551487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.551805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.551812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.552172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.552361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.552370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.552418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.552715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.552723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.552914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.553254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.553261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.553460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.553755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.553764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 
00:34:12.351 [2024-04-26 23:37:01.554191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.554457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.554465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.554624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.554944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.554953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.555158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.555450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.555458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.555651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.555953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.555961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.556274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.556638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.556645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.556977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.557343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.557350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.557666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.558087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.558095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 
00:34:12.351 [2024-04-26 23:37:01.558416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.558778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.558785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.559108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.559423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.559431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.559763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.560103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.560111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.560308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.560501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.560509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.560808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.561129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.561137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.561499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.561863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.561870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 00:34:12.351 [2024-04-26 23:37:01.562056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.562100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.351 [2024-04-26 23:37:01.562106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.351 qpair failed and we were unable to recover it. 
00:34:12.352 [2024-04-26 23:37:01.562427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.562744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.562752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.563103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.563425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.563433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.563656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.563972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.563980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.564319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.564636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.564644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.564997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.565254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.565262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.565310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.565604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.565612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.565812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.565986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.565993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 
00:34:12.352 [2024-04-26 23:37:01.566208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.566385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.566392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.566721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.567116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.567124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.567459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.567662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.567669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.567995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.568316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.568324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.568658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.568855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.568863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.569076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.569401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.569409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.569709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.570028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.570035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 
00:34:12.352 [2024-04-26 23:37:01.570246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.570609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.570616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.570946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.571310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.571317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.571524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.571820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.571830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.572150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.572524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.572531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.572888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.573257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.573265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.573458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.573758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.573766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.574089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.574451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.574459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 
00:34:12.352 [2024-04-26 23:37:01.574797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.575043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.575051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.575101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.575285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.575292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.575458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.575675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.575682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.575998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.576301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.576308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.576633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.576996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.577004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.577329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.577488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.577494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 00:34:12.352 [2024-04-26 23:37:01.577777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.578108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.578116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.352 qpair failed and we were unable to recover it. 
00:34:12.352 [2024-04-26 23:37:01.578476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.352 [2024-04-26 23:37:01.578669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.353 [2024-04-26 23:37:01.578678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.353 qpair failed and we were unable to recover it. 00:34:12.353 [2024-04-26 23:37:01.579002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.579359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.579369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.627 qpair failed and we were unable to recover it. 00:34:12.627 [2024-04-26 23:37:01.579724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.579909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.579917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.627 qpair failed and we were unable to recover it. 00:34:12.627 [2024-04-26 23:37:01.580189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.580373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.580380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.627 qpair failed and we were unable to recover it. 00:34:12.627 [2024-04-26 23:37:01.580654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.580845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.580853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.627 qpair failed and we were unable to recover it. 00:34:12.627 [2024-04-26 23:37:01.581166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.581487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.581495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.627 qpair failed and we were unable to recover it. 00:34:12.627 [2024-04-26 23:37:01.581773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.581986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.581994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.627 qpair failed and we were unable to recover it. 
00:34:12.627 [2024-04-26 23:37:01.582228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.627 [2024-04-26 23:37:01.582404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.582413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.582706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.582859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.582866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.583200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.583520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.583528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.583883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.584204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.584212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.584394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.584763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.584771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.584963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.585270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.585279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.585610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.585814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.585822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 
00:34:12.628 [2024-04-26 23:37:01.586174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.586367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.586376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.586540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.586888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.586897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.587094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.587368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.587376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.587702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.588035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.588042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.588359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.588562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.588570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.588770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.589104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.589113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.589439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.589732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.589741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 
00:34:12.628 [2024-04-26 23:37:01.589930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.590204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.590212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.590562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.590770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.590778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.591135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.591453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.591462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.591656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.591854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.591863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.592132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.592455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.592463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.592816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.592971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.592979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.593298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.593451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.593458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 
00:34:12.628 [2024-04-26 23:37:01.593812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.594171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.594178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.594510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.594667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.594675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.628 qpair failed and we were unable to recover it. 00:34:12.628 [2024-04-26 23:37:01.595032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.595351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.628 [2024-04-26 23:37:01.595361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.595695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.596045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.596052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.596368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.596723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.596730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.597082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.597132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.597138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.597467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.597777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.597785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 
00:34:12.629 [2024-04-26 23:37:01.598112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.598320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.598328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.598683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.598932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.598940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.599293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.599633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.599641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.600056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.600371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.600378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.600569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.600883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.600892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.601115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.601431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.601440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.601772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.602057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.602065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 
00:34:12.629 [2024-04-26 23:37:01.602402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.602716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.602724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.603087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.603451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.603458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.603500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.603684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.603691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.603880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.604182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.604190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.604500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.604578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.604586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.604934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.605259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.605267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.605587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.605954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.605963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 
00:34:12.629 [2024-04-26 23:37:01.606140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.606509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.606518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.606755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.607038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.607046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.607385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.607708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.607715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.607935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.608219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.608227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.608423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.608769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.608776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.629 qpair failed and we were unable to recover it. 00:34:12.629 [2024-04-26 23:37:01.608959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.629 [2024-04-26 23:37:01.609165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.630 [2024-04-26 23:37:01.609173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.630 qpair failed and we were unable to recover it. 00:34:12.630 [2024-04-26 23:37:01.609502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.630 [2024-04-26 23:37:01.609790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.630 [2024-04-26 23:37:01.609798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.630 qpair failed and we were unable to recover it. 
00:34:12.636 [2024-04-26 23:37:01.697247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.697547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.697556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.697760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.697967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.697975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.698273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.698427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.698434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.698476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.698651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.698658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.698951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.699283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.699291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.699477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.699846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.699854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.700161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.700525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.700533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 
00:34:12.636 [2024-04-26 23:37:01.700860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.701074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.701082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.701405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.701723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.701731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.701918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.702230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.702237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.702575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.702906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.702915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.703132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.703457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.703465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.703816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.704152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.704160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.704362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.704585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.704593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 
00:34:12.636 [2024-04-26 23:37:01.704758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.705022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.705030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.705366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.705691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.705700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.706029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.706330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.706338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.706533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.706875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.706883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.636 [2024-04-26 23:37:01.707232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.707436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.636 [2024-04-26 23:37:01.707443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.636 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.707770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.707974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.707982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.708192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.708533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.708541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 
00:34:12.637 [2024-04-26 23:37:01.708895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.709075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.709082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.709393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.709753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.709761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.710096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.710461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.710469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.710803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.711014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.711021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.711362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.711730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.711737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.712124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.712429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.712438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.712803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.712956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.712964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 
00:34:12.637 [2024-04-26 23:37:01.713293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.713655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.713663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.713870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.714028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.714035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.714391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.714542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.714549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.714895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.715107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.715114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.715295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.715605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.715613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.715905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.716239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.716246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.716448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.716764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.716772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 
00:34:12.637 [2024-04-26 23:37:01.717124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.717418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.717425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.717470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.717645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.717653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.717967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.718291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.718298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.718479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.718850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.718863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.719247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.719518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.719525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.719723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.720114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.720122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.720491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.720807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.720815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 
00:34:12.637 [2024-04-26 23:37:01.721019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.721249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.721258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.721465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.721798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.721806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.722143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.722505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.722514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.722869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.723165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.723173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.723503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.723821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.723829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.637 qpair failed and we were unable to recover it. 00:34:12.637 [2024-04-26 23:37:01.724038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.637 [2024-04-26 23:37:01.724080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.724087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.724290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.724493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.724500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 
00:34:12.638 [2024-04-26 23:37:01.724735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.725020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.725028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.725329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.725625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.725632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.725821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.726001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.726010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.726334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.726594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.726602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.726915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.727146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.727153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.727322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.727652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.727660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.727863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.728103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.728110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 
00:34:12.638 [2024-04-26 23:37:01.728436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.728679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.728687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.729087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.729438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.729446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.729643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.729992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.730000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.730319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.730564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.730572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.730905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.731109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.731116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.731301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.731485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.731492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.731700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.732026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.732035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 
00:34:12.638 [2024-04-26 23:37:01.732370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.732682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.732690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.638 qpair failed and we were unable to recover it. 00:34:12.638 [2024-04-26 23:37:01.732880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.638 [2024-04-26 23:37:01.733073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.733081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.733408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.733622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.733629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.733953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.734308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.734315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.734627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.734954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.734962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.735003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.735313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.735322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.735513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.735848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.735858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 
00:34:12.639 [2024-04-26 23:37:01.736160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.736359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.736366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.736710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.736960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.736967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.737160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.737462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.737470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.737660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.738041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.738049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.738347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.738502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.738509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.738824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.739120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.739128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.739460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.739664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.739671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 
00:34:12.639 [2024-04-26 23:37:01.739993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.740336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.740344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.740554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.740855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.740863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.741049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.741379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.741387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.741585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.741749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.741757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.742176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.742505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.742512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.742738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.742883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.742892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.743185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.743490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.743499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 
00:34:12.639 [2024-04-26 23:37:01.743797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.744102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.744110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.744415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.744775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.744783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.745115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.745481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.745489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.745680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.745955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.745963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.746004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.746245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.746252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.746616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.746882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.746891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.746940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.747286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.747293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 
00:34:12.639 [2024-04-26 23:37:01.747655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.747981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.747988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.639 [2024-04-26 23:37:01.748327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.748577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.639 [2024-04-26 23:37:01.748584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.639 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.748907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.749273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.749280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.749615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.749981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.749989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.750177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.750549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.750557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.750876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.751240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.751247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.751432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.751609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.751617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 
00:34:12.640 [2024-04-26 23:37:01.751921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.752277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.752284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.752582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.752945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.752953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.753165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.753315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.753323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.753500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.753640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.753648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.753943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.754227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.754234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.754547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.754912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.754920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 00:34:12.640 [2024-04-26 23:37:01.755252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.755447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.640 [2024-04-26 23:37:01.755454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.640 qpair failed and we were unable to recover it. 
00:34:12.645 [2024-04-26 23:37:01.835023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.835245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.835252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.645 qpair failed and we were unable to recover it. 00:34:12.645 [2024-04-26 23:37:01.835419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.835841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.835850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.645 qpair failed and we were unable to recover it. 00:34:12.645 [2024-04-26 23:37:01.836185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.836393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.836400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.645 qpair failed and we were unable to recover it. 00:34:12.645 [2024-04-26 23:37:01.836715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.837043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.837050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.645 qpair failed and we were unable to recover it. 00:34:12.645 [2024-04-26 23:37:01.837131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.837435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.837443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.645 qpair failed and we were unable to recover it. 00:34:12.645 [2024-04-26 23:37:01.837645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.837980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.837989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.645 qpair failed and we were unable to recover it. 00:34:12.645 [2024-04-26 23:37:01.838286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.838567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.838574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.645 qpair failed and we were unable to recover it. 
00:34:12.645 [2024-04-26 23:37:01.838901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.839238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.839246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.645 qpair failed and we were unable to recover it. 00:34:12.645 [2024-04-26 23:37:01.839578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.839778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.839785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.645 qpair failed and we were unable to recover it. 00:34:12.645 [2024-04-26 23:37:01.839959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.840151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.645 [2024-04-26 23:37:01.840158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.645 qpair failed and we were unable to recover it. 00:34:12.645 [2024-04-26 23:37:01.840563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.840737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.840745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.841099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.841458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.841465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.841787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.842112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.842121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.842433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.842797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.842805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 
00:34:12.646 [2024-04-26 23:37:01.843007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.843294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.843301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.843494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.843825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.843833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.844211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.844525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.844533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.844844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.845165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.845173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.845515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.845714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.845722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.845939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.846170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.846178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.846490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.846774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.846782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 
00:34:12.646 [2024-04-26 23:37:01.847115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.847314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.847321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.847675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.847976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.847983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.848321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.848375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.848382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.848608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.848781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.848788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.849008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.849346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.849354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.849678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.849962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.849969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.850268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.850630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.850638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 
00:34:12.646 [2024-04-26 23:37:01.851069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.851216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.851223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.851271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.851601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.851609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.851817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.852157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.852165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.852488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.852850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.852859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.853136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.853480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.853487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.853848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.854116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.854126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.854322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.854545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.854553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 
00:34:12.646 [2024-04-26 23:37:01.854740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.855082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.855090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.855429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.855631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.855639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.856004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.856369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.856377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.646 qpair failed and we were unable to recover it. 00:34:12.646 [2024-04-26 23:37:01.856708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.856908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.646 [2024-04-26 23:37:01.856916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.857167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.857447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.857455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.857811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.858017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.858025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.858189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.858512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.858520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 
00:34:12.647 [2024-04-26 23:37:01.858704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.859055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.859063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.859246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.859445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.859453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.859785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.860073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.860082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.860417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.860780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.860789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.861133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.861454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.861463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.861817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.862020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.862028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.862380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.862743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.862751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 
00:34:12.647 [2024-04-26 23:37:01.862946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.863289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.863297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.863597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.863792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.863800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.863985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.864215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.864222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.864520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.864824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.864832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.865153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.865429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.865438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.865631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.865989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.865997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.866356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.866561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.866572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 
00:34:12.647 [2024-04-26 23:37:01.866902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.867239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.867247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.867435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.867643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.867653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.867962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.868282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.868291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.868636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.868841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.868848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.869166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.869416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.869425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.869619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.869820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.647 [2024-04-26 23:37:01.869827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.647 qpair failed and we were unable to recover it. 00:34:12.647 [2024-04-26 23:37:01.869874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-04-26 23:37:01.870103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.917 [2024-04-26 23:37:01.870114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.917 qpair failed and we were unable to recover it. 
00:34:12.918 [2024-04-26 23:37:01.870405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.870696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.870704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.871027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.871378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.871386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.871589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.871954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.871963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.872289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.872494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.872503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.872815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.873110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.873120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.873272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.873576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.873584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.873789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.874096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.874104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 
00:34:12.918 [2024-04-26 23:37:01.874439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.874754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.874762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.875114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.875477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.875485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.875819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.876014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.876023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.876326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.876681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.876689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.876873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.877234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.877242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.877598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.877919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.877927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.878259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.878453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.878461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 
00:34:12.918 [2024-04-26 23:37:01.878658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.878967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.878975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.879316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.879678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.879685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.880039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.880354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.880361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.880688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.880836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.880854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.881072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.881392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.881400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.881607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.881917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.881925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.882310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.882623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.882631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 
00:34:12.918 [2024-04-26 23:37:01.882848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.883071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.883079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.883405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.883767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.883775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.884109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.884471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.884479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.884809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.885118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.885125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.885328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.885666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.885674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.885970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.886331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.886339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.886580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.886943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.886951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 
00:34:12.918 [2024-04-26 23:37:01.887119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.887419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.887427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.887789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.888148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.888156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.888511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.888831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.888843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.889172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.889537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.889544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.889898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.890112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.890120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.890459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.890662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.890669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.891025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.891278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.891286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 
00:34:12.918 [2024-04-26 23:37:01.891474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.891786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.891794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.892153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.892482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.892491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.918 [2024-04-26 23:37:01.892822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.893028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.918 [2024-04-26 23:37:01.893036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.918 qpair failed and we were unable to recover it. 00:34:12.919 [2024-04-26 23:37:01.893193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-04-26 23:37:01.893518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-04-26 23:37:01.893526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-04-26 23:37:01.893709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-04-26 23:37:01.893902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-04-26 23:37:01.893910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-04-26 23:37:01.894204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-04-26 23:37:01.894445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-04-26 23:37:01.894453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 00:34:12.919 [2024-04-26 23:37:01.894787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-04-26 23:37:01.894865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.919 [2024-04-26 23:37:01.894871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.919 qpair failed and we were unable to recover it. 
00:34:12.919 [2024-04-26 23:37:01.895203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.919 [2024-04-26 23:37:01.895572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.919 [2024-04-26 23:37:01.895581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.919 qpair failed and we were unable to recover it.
[... identical error pattern repeats continuously (two posix_sock_create connect() failures with errno = 111, followed by an nvme_tcp_qpair_connect_sock connection error on tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") for SPDK timestamps 2024-04-26 23:37:01.895203 through 23:37:01.983072, harness timestamps 00:34:12.919-00:34:12.923 ...]
00:34:12.923 [2024-04-26 23:37:01.983379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.983734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.983742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.984059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.984254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.984262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.984568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.984930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.984938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.985289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.985490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.985497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.985677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.985987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.985995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.986318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.986635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.986644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.986974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.987352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.987360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 
00:34:12.923 [2024-04-26 23:37:01.987691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.987973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.987981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.988161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.988496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.988504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.988849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.989045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.989053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.989202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.989510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.989518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.989875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.990187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.990195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.990521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.990729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.990736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.990858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.990906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.990914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 
00:34:12.923 [2024-04-26 23:37:01.991255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.991561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.991569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.991903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.992149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.992157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.992472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.992841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.992849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.993148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.993346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.993353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.993713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.994003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.994012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.994324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.994673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.994681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.995030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.995313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.995320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 
00:34:12.923 [2024-04-26 23:37:01.995687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.996031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.996039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.996347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.996553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.996560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.996894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.997099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.997107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.997476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.997846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.997854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.923 qpair failed and we were unable to recover it. 00:34:12.923 [2024-04-26 23:37:01.998170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.923 [2024-04-26 23:37:01.998387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-04-26 23:37:01.998395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-04-26 23:37:01.998598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-04-26 23:37:01.998824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-04-26 23:37:01.998832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 00:34:12.924 [2024-04-26 23:37:01.999028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-04-26 23:37:01.999422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.924 [2024-04-26 23:37:01.999430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420 00:34:12.924 qpair failed and we were unable to recover it. 
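For reference, errno 111 on Linux is ECONNREFUSED: the initiator's connect() reached 10.0.0.2, but nothing was accepting on port 4420, so the SPDK host keeps retrying the queue-pair connection and logging the failure each time. A minimal Bash probe, using only the address and port taken from the trace above, reproduces the same refusal:

# Minimal sketch: probe the NVMe/TCP listener the initiator is dialing.
# Uses bash's built-in /dev/tcp redirection; 10.0.0.2:4420 comes from the
# trace above and is refused unless a target is actually listening there.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connect() to 10.0.0.2:4420 failed (ECONNREFUSED is errno 111 when nothing listens)"
fi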
00:34:12.924 23:37:01 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:34:12.924 23:37:01 -- common/autotest_common.sh@850 -- # return 0
00:34:12.924 23:37:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:34:12.924 23:37:01 -- common/autotest_common.sh@716 -- # xtrace_disable
00:34:12.924 23:37:01 -- common/autotest_common.sh@10 -- # set +x
00:34:12.925 23:37:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:12.925 23:37:02 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:12.925 23:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:34:12.925 23:37:02 -- common/autotest_common.sh@10 -- # set +x
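The trap line above is ordinary Bash signal handling: it registers a cleanup sequence (dump shared-memory state, then tear down the NVMe-oF test fixture) to run on Ctrl-C, termination, or normal exit, with "|| :" ensuring a failed diagnostic dump does not abort the teardown. A minimal self-contained sketch of the same pattern; the function names here are placeholders, not the test suite's real helpers:

#!/usr/bin/env bash
# Sketch of the trap pattern used above; dump_diagnostics and stop_target
# are hypothetical stand-ins for process_shm and nvmftestfini.
dump_diagnostics() { echo "collecting diagnostics"; }
stop_target()      { echo "stopping target"; }
cleanup() {
    dump_diagnostics || :   # "|| :" keeps a failed dump from aborting cleanup
    stop_target
}
trap cleanup SIGINT SIGTERM EXIT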
00:34:12.925 [2024-04-26 23:37:02.040356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.925 [2024-04-26 23:37:02.040577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.925 [2024-04-26 23:37:02.040583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3ad0000b90 with addr=10.0.0.2, port=4420
00:34:12.925 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure sequences repeat through 23:37:02.052936 and are elided]
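Context note on the failure signature above: errno = 111 is ECONNREFUSED on Linux, i.e. the initiator's connect() to 10.0.0.2:4420 is being refused because nothing is listening there yet, so each new qpair fails immediately and the host gives up on it. A hedged way to confirm the errno mapping, assuming a Linux host with Python 3 available:

  # print the value of ECONNREFUSED and the kernel's message for errno 111
  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'
  # expected output: 111 Connection refused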
00:34:12.926 Malloc0
00:34:12.926 23:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:34:12.926 23:37:02 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:12.926 23:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:34:12.926 23:37:02 -- common/autotest_common.sh@10 -- # set +x
[concurrent connect() retries (errno = 111) against 10.0.0.2:4420 continue through 23:37:02.061244 and are elided]
00:34:12.926 [2024-04-26 23:37:02.062461] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[connect() retries (errno = 111) continue through 23:37:02.068915 and are elided]
00:34:12.927 23:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:34:12.927 23:37:02 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:12.927 23:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:34:12.927 23:37:02 -- common/autotest_common.sh@10 -- # set +x
[connect() retries (errno = 111) continue through 23:37:02.080461 and are elided]
00:34:12.927 23:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:34:12.927 23:37:02 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:12.928 23:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:34:12.928 23:37:02 -- common/autotest_common.sh@10 -- # set +x
[connect() retries (errno = 111) continue through 23:37:02.092510 and are elided]
00:34:12.928 23:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:34:12.928 23:37:02 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:12.928 23:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:34:12.928 23:37:02 -- common/autotest_common.sh@10 -- # set +x
[connect() retries (errno = 111) continue through 23:37:02.102510 and are elided]
00:34:12.928 [2024-04-26 23:37:02.102726] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:12.929 [2024-04-26 23:37:02.104750] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:34:12.929 [2024-04-26 23:37:02.104785] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f3ad0000b90 (107): Transport endpoint is not connected
00:34:12.929 [2024-04-26 23:37:02.104818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.929 qpair failed and we were unable to recover it.
00:34:12.929 23:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:34:12.929 23:37:02 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:12.929 23:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:34:12.929 23:37:02 -- common/autotest_common.sh@10 -- # set +x
00:34:12.929 [2024-04-26 23:37:02.113347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.929 [2024-04-26 23:37:02.113427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.929 [2024-04-26 23:37:02.113441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.929 [2024-04-26 23:37:02.113447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.929 [2024-04-26 23:37:02.113451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:12.929 [2024-04-26 23:37:02.113464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.929 qpair failed and we were unable to recover it.
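The rpc_cmd lines in the trace above are the autotest's thin wrapper around SPDK's JSON-RPC client. As a rough standalone sketch of the same target bring-up (assuming a running nvmf_tgt and the stock scripts/rpc.py from an SPDK checkout; the Malloc0 bdev-creation step and its sizes are inferred from the "Malloc0" output rather than visible in this excerpt):

  # create the TCP transport, a subsystem backed by a RAM disk, and the listeners
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512   # inferred: 64 MiB, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420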
00:34:12.929 23:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:34:12.929 23:37:02 -- host/target_disconnect.sh@58 -- # wait 4188786
00:34:12.929 [2024-04-26 23:37:02.123270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:12.929 [2024-04-26 23:37:02.123355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:12.929 [2024-04-26 23:37:02.123367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:12.929 [2024-04-26 23:37:02.123372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:12.929 [2024-04-26 23:37:02.123376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:12.929 [2024-04-26 23:37:02.123387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:12.929 qpair failed and we were unable to recover it.
[the identical Unknown-controller/CONNECT-failure block repeats at roughly 10 ms intervals through 23:37:02.293735; repetitions elided]
00:34:13.194 [2024-04-26 23:37:02.303646] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.303708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.303719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.303724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.303728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.303738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-04-26 23:37:02.313567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.313617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.313629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.313634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.313638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.313649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-04-26 23:37:02.323710] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.323761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.323773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.323778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.323786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.323796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 
00:34:13.194 [2024-04-26 23:37:02.333609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.333713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.333724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.333731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.333736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.333747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-04-26 23:37:02.343765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.343824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.343835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.343844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.343849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.343859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-04-26 23:37:02.353880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.353946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.353957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.353962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.353967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.353977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 
00:34:13.194 [2024-04-26 23:37:02.363877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.363927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.363938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.363943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.363947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.363958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-04-26 23:37:02.373764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.373821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.373832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.373841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.373846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.373857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-04-26 23:37:02.383955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.384011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.384022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.384027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.384032] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.384042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 
00:34:13.194 [2024-04-26 23:37:02.393933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.393984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.393995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.394000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.394004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.394015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-04-26 23:37:02.403931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.403983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.403994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.403999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.404003] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.404013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 00:34:13.194 [2024-04-26 23:37:02.413958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.414053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.194 [2024-04-26 23:37:02.414064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.194 [2024-04-26 23:37:02.414069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.194 [2024-04-26 23:37:02.414076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.194 [2024-04-26 23:37:02.414087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.194 qpair failed and we were unable to recover it. 
00:34:13.194 [2024-04-26 23:37:02.423992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.194 [2024-04-26 23:37:02.424048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.195 [2024-04-26 23:37:02.424059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.195 [2024-04-26 23:37:02.424063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.195 [2024-04-26 23:37:02.424068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.195 [2024-04-26 23:37:02.424078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.195 qpair failed and we were unable to recover it. 00:34:13.195 [2024-04-26 23:37:02.434026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.195 [2024-04-26 23:37:02.434085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.195 [2024-04-26 23:37:02.434096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.195 [2024-04-26 23:37:02.434101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.195 [2024-04-26 23:37:02.434106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.195 [2024-04-26 23:37:02.434116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.195 qpair failed and we were unable to recover it. 00:34:13.195 [2024-04-26 23:37:02.444076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.195 [2024-04-26 23:37:02.444130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.195 [2024-04-26 23:37:02.444142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.195 [2024-04-26 23:37:02.444147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.195 [2024-04-26 23:37:02.444151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.195 [2024-04-26 23:37:02.444162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.195 qpair failed and we were unable to recover it. 
00:34:13.457 [2024-04-26 23:37:02.454064] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.457 [2024-04-26 23:37:02.454128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.457 [2024-04-26 23:37:02.454139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.457 [2024-04-26 23:37:02.454144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.457 [2024-04-26 23:37:02.454149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.457 [2024-04-26 23:37:02.454159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.457 qpair failed and we were unable to recover it. 00:34:13.457 [2024-04-26 23:37:02.464184] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.457 [2024-04-26 23:37:02.464244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.457 [2024-04-26 23:37:02.464255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.457 [2024-04-26 23:37:02.464260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.457 [2024-04-26 23:37:02.464264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.457 [2024-04-26 23:37:02.464274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.457 qpair failed and we were unable to recover it. 00:34:13.457 [2024-04-26 23:37:02.474111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.457 [2024-04-26 23:37:02.474166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.457 [2024-04-26 23:37:02.474176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.457 [2024-04-26 23:37:02.474181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.457 [2024-04-26 23:37:02.474185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.457 [2024-04-26 23:37:02.474196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.457 qpair failed and we were unable to recover it. 
00:34:13.457 [2024-04-26 23:37:02.484137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.457 [2024-04-26 23:37:02.484194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.457 [2024-04-26 23:37:02.484205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.457 [2024-04-26 23:37:02.484210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.484214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.484224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 00:34:13.458 [2024-04-26 23:37:02.494226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.494280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.494291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.494295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.494300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.494310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 00:34:13.458 [2024-04-26 23:37:02.504221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.504295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.504306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.504314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.504318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.504329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 
00:34:13.458 [2024-04-26 23:37:02.514101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.514162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.514173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.514178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.514182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.514192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 00:34:13.458 [2024-04-26 23:37:02.524223] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.524270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.524281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.524285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.524290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.524300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 00:34:13.458 [2024-04-26 23:37:02.534299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.534369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.534380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.534385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.534390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.534400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 
00:34:13.458 [2024-04-26 23:37:02.544288] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.544346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.544357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.544362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.544367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.544377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 00:34:13.458 [2024-04-26 23:37:02.554344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.554396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.554407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.554412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.554417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.554427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 00:34:13.458 [2024-04-26 23:37:02.564382] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.564475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.564486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.564491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.564496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.564506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 
00:34:13.458 [2024-04-26 23:37:02.574391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.574449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.574460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.574465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.574470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.574480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 00:34:13.458 [2024-04-26 23:37:02.584306] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.584363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.584374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.584379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.584384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.584394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 00:34:13.458 [2024-04-26 23:37:02.594464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.594514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.594527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.594532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.594537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.594547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 
00:34:13.458 [2024-04-26 23:37:02.604477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.458 [2024-04-26 23:37:02.604525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.458 [2024-04-26 23:37:02.604537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.458 [2024-04-26 23:37:02.604542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.458 [2024-04-26 23:37:02.604547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.458 [2024-04-26 23:37:02.604558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.458 qpair failed and we were unable to recover it. 00:34:13.459 [2024-04-26 23:37:02.614513] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.459 [2024-04-26 23:37:02.614565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.459 [2024-04-26 23:37:02.614578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.459 [2024-04-26 23:37:02.614583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.459 [2024-04-26 23:37:02.614588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.459 [2024-04-26 23:37:02.614599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.459 qpair failed and we were unable to recover it. 00:34:13.459 [2024-04-26 23:37:02.624553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.459 [2024-04-26 23:37:02.624631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.459 [2024-04-26 23:37:02.624643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.459 [2024-04-26 23:37:02.624648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.459 [2024-04-26 23:37:02.624652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.459 [2024-04-26 23:37:02.624662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.459 qpair failed and we were unable to recover it. 
00:34:13.459 [2024-04-26 23:37:02.634434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.459 [2024-04-26 23:37:02.634484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.459 [2024-04-26 23:37:02.634495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.459 [2024-04-26 23:37:02.634500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.459 [2024-04-26 23:37:02.634505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.459 [2024-04-26 23:37:02.634518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.459 qpair failed and we were unable to recover it. 00:34:13.459 [2024-04-26 23:37:02.644653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.459 [2024-04-26 23:37:02.644704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.459 [2024-04-26 23:37:02.644715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.459 [2024-04-26 23:37:02.644720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.459 [2024-04-26 23:37:02.644725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.459 [2024-04-26 23:37:02.644735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.459 qpair failed and we were unable to recover it. 00:34:13.459 [2024-04-26 23:37:02.654617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.459 [2024-04-26 23:37:02.654674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.459 [2024-04-26 23:37:02.654685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.459 [2024-04-26 23:37:02.654690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.459 [2024-04-26 23:37:02.654695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.459 [2024-04-26 23:37:02.654705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.459 qpair failed and we were unable to recover it. 
00:34:13.459 [2024-04-26 23:37:02.664637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.459 [2024-04-26 23:37:02.664693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.459 [2024-04-26 23:37:02.664704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.459 [2024-04-26 23:37:02.664709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.459 [2024-04-26 23:37:02.664714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.459 [2024-04-26 23:37:02.664724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.459 qpair failed and we were unable to recover it. 00:34:13.459 [2024-04-26 23:37:02.674663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.459 [2024-04-26 23:37:02.674715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.459 [2024-04-26 23:37:02.674726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.459 [2024-04-26 23:37:02.674731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.459 [2024-04-26 23:37:02.674736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.459 [2024-04-26 23:37:02.674746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.459 qpair failed and we were unable to recover it. 00:34:13.459 [2024-04-26 23:37:02.684702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.459 [2024-04-26 23:37:02.684754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.459 [2024-04-26 23:37:02.684768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.459 [2024-04-26 23:37:02.684773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.459 [2024-04-26 23:37:02.684778] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.459 [2024-04-26 23:37:02.684788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.459 qpair failed and we were unable to recover it. 
00:34:13.459 [2024-04-26 23:37:02.694733] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.459 [2024-04-26 23:37:02.694791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.459 [2024-04-26 23:37:02.694802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.459 [2024-04-26 23:37:02.694807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.459 [2024-04-26 23:37:02.694812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.459 [2024-04-26 23:37:02.694822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.459 qpair failed and we were unable to recover it. 00:34:13.459 [2024-04-26 23:37:02.704751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.459 [2024-04-26 23:37:02.704809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.459 [2024-04-26 23:37:02.704820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.459 [2024-04-26 23:37:02.704825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.459 [2024-04-26 23:37:02.704829] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.459 [2024-04-26 23:37:02.704844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.459 qpair failed and we were unable to recover it. 00:34:13.722 [2024-04-26 23:37:02.714770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.722 [2024-04-26 23:37:02.714824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.722 [2024-04-26 23:37:02.714835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.722 [2024-04-26 23:37:02.714844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.722 [2024-04-26 23:37:02.714849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.722 [2024-04-26 23:37:02.714860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.722 qpair failed and we were unable to recover it. 
00:34:13.722 [2024-04-26 23:37:02.724781] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.722 [2024-04-26 23:37:02.724851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.722 [2024-04-26 23:37:02.724862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.722 [2024-04-26 23:37:02.724867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.722 [2024-04-26 23:37:02.724875] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.722 [2024-04-26 23:37:02.724885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.722 qpair failed and we were unable to recover it. 00:34:13.722 [2024-04-26 23:37:02.734847] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.722 [2024-04-26 23:37:02.734903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.722 [2024-04-26 23:37:02.734915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.722 [2024-04-26 23:37:02.734920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.722 [2024-04-26 23:37:02.734925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.722 [2024-04-26 23:37:02.734935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.722 qpair failed and we were unable to recover it. 00:34:13.722 [2024-04-26 23:37:02.744742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.722 [2024-04-26 23:37:02.744799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.722 [2024-04-26 23:37:02.744811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.722 [2024-04-26 23:37:02.744816] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.722 [2024-04-26 23:37:02.744821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.722 [2024-04-26 23:37:02.744831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.722 qpair failed and we were unable to recover it. 
00:34:13.722 [2024-04-26 23:37:02.754894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.722 [2024-04-26 23:37:02.754943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.722 [2024-04-26 23:37:02.754954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.722 [2024-04-26 23:37:02.754959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.722 [2024-04-26 23:37:02.754964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.722 [2024-04-26 23:37:02.754974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.722 qpair failed and we were unable to recover it. 00:34:13.722 [2024-04-26 23:37:02.764903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.722 [2024-04-26 23:37:02.764955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.722 [2024-04-26 23:37:02.764966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.722 [2024-04-26 23:37:02.764971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.722 [2024-04-26 23:37:02.764975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.722 [2024-04-26 23:37:02.764986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 00:34:13.723 [2024-04-26 23:37:02.774943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.775045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.775056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.775062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.775066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.775076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 
00:34:13.723 [2024-04-26 23:37:02.784969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.785027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.785038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.785043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.785048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.785058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 00:34:13.723 [2024-04-26 23:37:02.794915] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.795004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.795015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.795020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.795024] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.795034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 00:34:13.723 [2024-04-26 23:37:02.805028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.805087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.805098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.805104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.805109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.805119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 
00:34:13.723 [2024-04-26 23:37:02.815052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.815141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.815152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.815157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.815164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.815175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 00:34:13.723 [2024-04-26 23:37:02.825066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.825131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.825142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.825147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.825151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.825162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 00:34:13.723 [2024-04-26 23:37:02.835015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.835111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.835122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.835127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.835131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.835141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 
00:34:13.723 [2024-04-26 23:37:02.845160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.845227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.845237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.845242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.845247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.845257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 00:34:13.723 [2024-04-26 23:37:02.855175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.855228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.855239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.855244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.855249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.855259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 00:34:13.723 [2024-04-26 23:37:02.865191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.865249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.865261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.865266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.865270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.865281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 
00:34:13.723 [2024-04-26 23:37:02.875096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.875149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.875160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.875166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.875170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.875181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 00:34:13.723 [2024-04-26 23:37:02.885233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.885283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.885294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.885300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.885304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.885314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 00:34:13.723 [2024-04-26 23:37:02.895277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.895333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.723 [2024-04-26 23:37:02.895344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.723 [2024-04-26 23:37:02.895349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.723 [2024-04-26 23:37:02.895354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.723 [2024-04-26 23:37:02.895364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.723 qpair failed and we were unable to recover it. 
00:34:13.723 [2024-04-26 23:37:02.905172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.723 [2024-04-26 23:37:02.905236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.724 [2024-04-26 23:37:02.905248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.724 [2024-04-26 23:37:02.905256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.724 [2024-04-26 23:37:02.905261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.724 [2024-04-26 23:37:02.905272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.724 qpair failed and we were unable to recover it. 00:34:13.724 [2024-04-26 23:37:02.915319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.724 [2024-04-26 23:37:02.915370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.724 [2024-04-26 23:37:02.915381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.724 [2024-04-26 23:37:02.915386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.724 [2024-04-26 23:37:02.915391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.724 [2024-04-26 23:37:02.915401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.724 qpair failed and we were unable to recover it. 00:34:13.724 [2024-04-26 23:37:02.925333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.724 [2024-04-26 23:37:02.925426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.724 [2024-04-26 23:37:02.925437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.724 [2024-04-26 23:37:02.925442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.724 [2024-04-26 23:37:02.925447] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.724 [2024-04-26 23:37:02.925457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.724 qpair failed and we were unable to recover it. 
00:34:13.724 [2024-04-26 23:37:02.935398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.724 [2024-04-26 23:37:02.935451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.724 [2024-04-26 23:37:02.935462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.724 [2024-04-26 23:37:02.935467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.724 [2024-04-26 23:37:02.935471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.724 [2024-04-26 23:37:02.935481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.724 qpair failed and we were unable to recover it. 00:34:13.724 [2024-04-26 23:37:02.945426] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.724 [2024-04-26 23:37:02.945488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.724 [2024-04-26 23:37:02.945498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.724 [2024-04-26 23:37:02.945504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.724 [2024-04-26 23:37:02.945508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.724 [2024-04-26 23:37:02.945518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.724 qpair failed and we were unable to recover it. 00:34:13.724 [2024-04-26 23:37:02.955444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.724 [2024-04-26 23:37:02.955495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.724 [2024-04-26 23:37:02.955506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.724 [2024-04-26 23:37:02.955510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.724 [2024-04-26 23:37:02.955515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.724 [2024-04-26 23:37:02.955525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.724 qpair failed and we were unable to recover it. 
00:34:13.724 [2024-04-26 23:37:02.965479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.724 [2024-04-26 23:37:02.965535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.724 [2024-04-26 23:37:02.965546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.724 [2024-04-26 23:37:02.965551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.724 [2024-04-26 23:37:02.965555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.724 [2024-04-26 23:37:02.965565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.724 qpair failed and we were unable to recover it. 00:34:13.986 [2024-04-26 23:37:02.975513] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.986 [2024-04-26 23:37:02.975596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.986 [2024-04-26 23:37:02.975607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.986 [2024-04-26 23:37:02.975612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.986 [2024-04-26 23:37:02.975617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.986 [2024-04-26 23:37:02.975627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.986 qpair failed and we were unable to recover it. 00:34:13.986 [2024-04-26 23:37:02.985545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.986 [2024-04-26 23:37:02.985600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.986 [2024-04-26 23:37:02.985611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.986 [2024-04-26 23:37:02.985616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.986 [2024-04-26 23:37:02.985621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.986 [2024-04-26 23:37:02.985631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.986 qpair failed and we were unable to recover it. 
00:34:13.986 [2024-04-26 23:37:02.995537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.986 [2024-04-26 23:37:02.995586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.986 [2024-04-26 23:37:02.995602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.986 [2024-04-26 23:37:02.995608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.986 [2024-04-26 23:37:02.995612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.986 [2024-04-26 23:37:02.995623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.986 qpair failed and we were unable to recover it. 00:34:13.986 [2024-04-26 23:37:03.005576] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.986 [2024-04-26 23:37:03.005625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.986 [2024-04-26 23:37:03.005636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.986 [2024-04-26 23:37:03.005641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.986 [2024-04-26 23:37:03.005646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.986 [2024-04-26 23:37:03.005656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.986 qpair failed and we were unable to recover it. 00:34:13.986 [2024-04-26 23:37:03.015611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.986 [2024-04-26 23:37:03.015666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.015677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.015682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.015687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.015697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 
00:34:13.987 [2024-04-26 23:37:03.025630] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.025689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.025700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.025705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.025709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.025719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 00:34:13.987 [2024-04-26 23:37:03.035532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.035587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.035598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.035603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.035608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.035621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 00:34:13.987 [2024-04-26 23:37:03.045695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.045771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.045782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.045787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.045791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.045802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 
00:34:13.987 [2024-04-26 23:37:03.055789] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.055849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.055861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.055865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.055870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.055881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 00:34:13.987 [2024-04-26 23:37:03.065752] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.065842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.065853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.065858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.065863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.065873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 00:34:13.987 [2024-04-26 23:37:03.075774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.075821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.075832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.075841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.075846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.075856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 
00:34:13.987 [2024-04-26 23:37:03.085794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.085851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.085865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.085870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.085874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.085885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 00:34:13.987 [2024-04-26 23:37:03.095718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.095779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.095790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.095795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.095799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.095810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 00:34:13.987 [2024-04-26 23:37:03.105889] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.105951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.105963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.105968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.105972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.105982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 
00:34:13.987 [2024-04-26 23:37:03.115772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.115834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.115851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.115855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.115860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.115871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 00:34:13.987 [2024-04-26 23:37:03.125932] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.125987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.125999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.126004] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.126009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.126022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 00:34:13.987 [2024-04-26 23:37:03.135909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.135986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.135997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.136002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.136006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.987 [2024-04-26 23:37:03.136017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.987 qpair failed and we were unable to recover it. 
00:34:13.987 [2024-04-26 23:37:03.145986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.987 [2024-04-26 23:37:03.146082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.987 [2024-04-26 23:37:03.146093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.987 [2024-04-26 23:37:03.146099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.987 [2024-04-26 23:37:03.146103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.988 [2024-04-26 23:37:03.146114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.988 qpair failed and we were unable to recover it. 00:34:13.988 [2024-04-26 23:37:03.156003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.988 [2024-04-26 23:37:03.156060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.988 [2024-04-26 23:37:03.156071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.988 [2024-04-26 23:37:03.156076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.988 [2024-04-26 23:37:03.156080] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.988 [2024-04-26 23:37:03.156091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.988 qpair failed and we were unable to recover it. 00:34:13.988 [2024-04-26 23:37:03.166019] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.988 [2024-04-26 23:37:03.166111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.988 [2024-04-26 23:37:03.166123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.988 [2024-04-26 23:37:03.166128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.988 [2024-04-26 23:37:03.166132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.988 [2024-04-26 23:37:03.166142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.988 qpair failed and we were unable to recover it. 
00:34:13.988 [2024-04-26 23:37:03.175937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.988 [2024-04-26 23:37:03.175994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.988 [2024-04-26 23:37:03.176005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.988 [2024-04-26 23:37:03.176011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.988 [2024-04-26 23:37:03.176015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.988 [2024-04-26 23:37:03.176026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.988 qpair failed and we were unable to recover it. 00:34:13.988 [2024-04-26 23:37:03.186071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.988 [2024-04-26 23:37:03.186126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.988 [2024-04-26 23:37:03.186137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.988 [2024-04-26 23:37:03.186142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.988 [2024-04-26 23:37:03.186146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.988 [2024-04-26 23:37:03.186156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.988 qpair failed and we were unable to recover it. 00:34:13.988 [2024-04-26 23:37:03.196085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.988 [2024-04-26 23:37:03.196159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.988 [2024-04-26 23:37:03.196170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.988 [2024-04-26 23:37:03.196175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.988 [2024-04-26 23:37:03.196179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.988 [2024-04-26 23:37:03.196190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.988 qpair failed and we were unable to recover it. 
00:34:13.988 [2024-04-26 23:37:03.206000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.988 [2024-04-26 23:37:03.206053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.988 [2024-04-26 23:37:03.206065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.988 [2024-04-26 23:37:03.206070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.988 [2024-04-26 23:37:03.206074] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.988 [2024-04-26 23:37:03.206085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.988 qpair failed and we were unable to recover it. 00:34:13.988 [2024-04-26 23:37:03.216200] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.988 [2024-04-26 23:37:03.216253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.988 [2024-04-26 23:37:03.216264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.988 [2024-04-26 23:37:03.216269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.988 [2024-04-26 23:37:03.216276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.988 [2024-04-26 23:37:03.216286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.988 qpair failed and we were unable to recover it. 00:34:13.988 [2024-04-26 23:37:03.226182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.988 [2024-04-26 23:37:03.226240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.988 [2024-04-26 23:37:03.226251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.988 [2024-04-26 23:37:03.226256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.988 [2024-04-26 23:37:03.226261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.988 [2024-04-26 23:37:03.226271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.988 qpair failed and we were unable to recover it. 
00:34:13.988 [2024-04-26 23:37:03.236247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:13.988 [2024-04-26 23:37:03.236305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:13.988 [2024-04-26 23:37:03.236326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:13.988 [2024-04-26 23:37:03.236332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:13.988 [2024-04-26 23:37:03.236336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:13.988 [2024-04-26 23:37:03.236350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:13.988 qpair failed and we were unable to recover it. 00:34:14.251 [2024-04-26 23:37:03.246233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.246286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.246298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.246303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.246308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.246318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 00:34:14.251 [2024-04-26 23:37:03.256282] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.256336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.256348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.256353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.256358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.256368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 
00:34:14.251 [2024-04-26 23:37:03.266365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.266422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.266434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.266439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.266443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.266454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 00:34:14.251 [2024-04-26 23:37:03.276311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.276366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.276376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.276381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.276386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.276396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 00:34:14.251 [2024-04-26 23:37:03.286218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.286276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.286287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.286292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.286296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.286307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 
00:34:14.251 [2024-04-26 23:37:03.296434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.296492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.296504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.296509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.296513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.296524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 00:34:14.251 [2024-04-26 23:37:03.306395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.306477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.306488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.306496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.306500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.306511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 00:34:14.251 [2024-04-26 23:37:03.316460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.316516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.316527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.316532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.316537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.316547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 
00:34:14.251 [2024-04-26 23:37:03.326456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.326505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.326516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.326521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.326526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.326536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 00:34:14.251 [2024-04-26 23:37:03.336374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.336429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.336440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.336445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.336450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.336460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 00:34:14.251 [2024-04-26 23:37:03.346518] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.346574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.346585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.346590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.346595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.346605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 
00:34:14.251 [2024-04-26 23:37:03.356581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.356646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.356657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.356662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.356666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.356676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 00:34:14.251 [2024-04-26 23:37:03.366593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.366666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.366677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.366682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.366686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.366696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 00:34:14.251 [2024-04-26 23:37:03.376614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.251 [2024-04-26 23:37:03.376674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.251 [2024-04-26 23:37:03.376685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.251 [2024-04-26 23:37:03.376690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.251 [2024-04-26 23:37:03.376695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.251 [2024-04-26 23:37:03.376705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.251 qpair failed and we were unable to recover it. 
00:34:14.251 [2024-04-26 23:37:03.386612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.251 [2024-04-26 23:37:03.386667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.251 [2024-04-26 23:37:03.386678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.251 [2024-04-26 23:37:03.386683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.386687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.386697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.396631] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.396717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.396731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.396736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.396741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.396751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.406671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.406719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.406730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.406736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.406740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.406750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.416685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.416739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.416750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.416755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.416759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.416769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.426618] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.426685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.426696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.426701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.426705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.426715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.436679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.436740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.436751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.436756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.436761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.436771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.446768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.446817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.446827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.446832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.446840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.446851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.456781] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.456835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.456850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.456855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.456859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.456869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.466844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.466900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.466911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.466916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.466920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.466930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.476904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.476959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.476970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.476975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.476979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.476990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.486772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.486822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.486836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.486844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.486849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.486860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.252 [2024-04-26 23:37:03.496927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.252 [2024-04-26 23:37:03.496988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.252 [2024-04-26 23:37:03.496999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.252 [2024-04-26 23:37:03.497004] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.252 [2024-04-26 23:37:03.497008] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.252 [2024-04-26 23:37:03.497019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.252 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.506968] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.507106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.507118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.507123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.507128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.507138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.516985] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.517036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.517047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.517052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.517057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.517067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.526994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.527052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.527063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.527068] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.527072] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.527086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.537046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.537099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.537110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.537115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.537120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.537130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.547052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.547133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.547144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.547149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.547154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.547166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.556956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.557019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.557030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.557035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.557040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.557050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.567108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.567210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.567221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.567226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.567232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.567242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.577165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.577219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.577233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.577238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.577242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.577252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.587176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.587232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.587243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.587248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.587252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.587262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.597196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.597246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.597258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.597263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.597267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.597278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.607110] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.607161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.607173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.607178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.607182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.607193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.617257] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.617309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.617320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.617326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.617336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.617346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.553 [2024-04-26 23:37:03.627277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.553 [2024-04-26 23:37:03.627336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.553 [2024-04-26 23:37:03.627347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.553 [2024-04-26 23:37:03.627352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.553 [2024-04-26 23:37:03.627356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.553 [2024-04-26 23:37:03.627366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.553 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.637329] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.637381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.637391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.637396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.637401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.637411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.647341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.647444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.647455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.647460] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.647465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.647475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.657390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.657446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.657456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.657462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.657466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.657476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.667392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.667450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.667461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.667466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.667470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.667480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.677390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.677491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.677502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.677507] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.677512] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.677522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.687441] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.687489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.687500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.687505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.687510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.687520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.697348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.697407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.697419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.697424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.697428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.697439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.707506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.707596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.707607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.707615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.707620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.707631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.717428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.717475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.717487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.717492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.717496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.717507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.727448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.727500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.727511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.727516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.727521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.727531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.737598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.737653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.737664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.737669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.737673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.737683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.747614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.747674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.747685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.747690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.747694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.747705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.757660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.757712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.757723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.757728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.757732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.757742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.767657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.767712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.767722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.767727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.767732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.767742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.554 [2024-04-26 23:37:03.777566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.554 [2024-04-26 23:37:03.777625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.554 [2024-04-26 23:37:03.777637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.554 [2024-04-26 23:37:03.777642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.554 [2024-04-26 23:37:03.777646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.554 [2024-04-26 23:37:03.777657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.554 qpair failed and we were unable to recover it.
00:34:14.820 [2024-04-26 23:37:03.787775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.820 [2024-04-26 23:37:03.787831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.820 [2024-04-26 23:37:03.787846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.820 [2024-04-26 23:37:03.787851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.820 [2024-04-26 23:37:03.787855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.820 [2024-04-26 23:37:03.787866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.820 qpair failed and we were unable to recover it.
00:34:14.820 [2024-04-26 23:37:03.797630] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.820 [2024-04-26 23:37:03.797697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.820 [2024-04-26 23:37:03.797709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.820 [2024-04-26 23:37:03.797717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.820 [2024-04-26 23:37:03.797722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.820 [2024-04-26 23:37:03.797733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.820 qpair failed and we were unable to recover it.
00:34:14.820 [2024-04-26 23:37:03.807767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.820 [2024-04-26 23:37:03.807816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.820 [2024-04-26 23:37:03.807827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.820 [2024-04-26 23:37:03.807832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.820 [2024-04-26 23:37:03.807841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.820 [2024-04-26 23:37:03.807852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.820 qpair failed and we were unable to recover it.
00:34:14.820 [2024-04-26 23:37:03.817806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.817863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.817874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.817879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.817884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.817896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.827853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.827914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.827925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.827930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.827935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.827946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.837863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.837914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.837925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.837930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.837935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.837946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.847878] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.847927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.847937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.847942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.847947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.847958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.857946] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.858046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.858057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.858062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.858066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.858077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.867939] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.868034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.868045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.868051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.868055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.868065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.877914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.877969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.877980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.877985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.877989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.877999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.888007] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.888059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.888073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.888078] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.888082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.888092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.898050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.898103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.898114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.898119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.898124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.898133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.907932] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.907992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.908003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.908008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.908013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.908023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.917959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.918028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.918040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.918044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.918048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.918059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.927993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.928044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.928056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.928061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.928065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.928078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.938191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.938256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.821 [2024-04-26 23:37:03.938267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.821 [2024-04-26 23:37:03.938272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.821 [2024-04-26 23:37:03.938276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.821 [2024-04-26 23:37:03.938287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.821 qpair failed and we were unable to recover it.
00:34:14.821 [2024-04-26 23:37:03.948155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:14.821 [2024-04-26 23:37:03.948255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:14.822 [2024-04-26 23:37:03.948266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:14.822 [2024-04-26 23:37:03.948271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:14.822 [2024-04-26 23:37:03.948276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:14.822 [2024-04-26 23:37:03.948286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:14.822 qpair failed and we were unable to recover it.
00:34:14.822 [2024-04-26 23:37:03.958175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:03.958235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:03.958246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:03.958251] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:03.958255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:03.958265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 00:34:14.822 [2024-04-26 23:37:03.968219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:03.968270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:03.968280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:03.968286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:03.968290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:03.968300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 00:34:14.822 [2024-04-26 23:37:03.978258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:03.978311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:03.978324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:03.978329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:03.978334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:03.978344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 
00:34:14.822 [2024-04-26 23:37:03.988268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:03.988326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:03.988338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:03.988343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:03.988347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:03.988358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 00:34:14.822 [2024-04-26 23:37:03.998315] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:03.998416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:03.998427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:03.998432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:03.998437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:03.998447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 00:34:14.822 [2024-04-26 23:37:04.008364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:04.008413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:04.008424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:04.008429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:04.008434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:04.008444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 
00:34:14.822 [2024-04-26 23:37:04.018375] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:04.018430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:04.018440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:04.018445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:04.018453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:04.018462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 00:34:14.822 [2024-04-26 23:37:04.028406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:04.028463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:04.028473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:04.028478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:04.028483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:04.028493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 00:34:14.822 [2024-04-26 23:37:04.038424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:04.038477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:04.038489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:04.038494] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:04.038499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:04.038510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 
00:34:14.822 [2024-04-26 23:37:04.048455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:04.048505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:04.048517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:04.048522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:04.048526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:04.048537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 00:34:14.822 [2024-04-26 23:37:04.058433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:04.058486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:04.058499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:04.058504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:04.058508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:04.058519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 00:34:14.822 [2024-04-26 23:37:04.068419] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:14.822 [2024-04-26 23:37:04.068482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:14.822 [2024-04-26 23:37:04.068493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:14.822 [2024-04-26 23:37:04.068498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:14.822 [2024-04-26 23:37:04.068503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:14.822 [2024-04-26 23:37:04.068513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:14.822 qpair failed and we were unable to recover it. 
00:34:15.085 [2024-04-26 23:37:04.078414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-04-26 23:37:04.078463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-04-26 23:37:04.078474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-04-26 23:37:04.078479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-04-26 23:37:04.078483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.085 [2024-04-26 23:37:04.078494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.085 qpair failed and we were unable to recover it. 00:34:15.085 [2024-04-26 23:37:04.088561] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-04-26 23:37:04.088610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-04-26 23:37:04.088621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-04-26 23:37:04.088626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-04-26 23:37:04.088631] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.085 [2024-04-26 23:37:04.088640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.085 qpair failed and we were unable to recover it. 00:34:15.085 [2024-04-26 23:37:04.098623] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-04-26 23:37:04.098728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-04-26 23:37:04.098747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-04-26 23:37:04.098753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-04-26 23:37:04.098757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.085 [2024-04-26 23:37:04.098771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.085 qpair failed and we were unable to recover it. 
00:34:15.085 [2024-04-26 23:37:04.108651] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-04-26 23:37:04.108712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-04-26 23:37:04.108730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-04-26 23:37:04.108737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-04-26 23:37:04.108746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.085 [2024-04-26 23:37:04.108760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.085 qpair failed and we were unable to recover it. 00:34:15.085 [2024-04-26 23:37:04.118656] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-04-26 23:37:04.118704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-04-26 23:37:04.118716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-04-26 23:37:04.118722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-04-26 23:37:04.118726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.085 [2024-04-26 23:37:04.118737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.085 qpair failed and we were unable to recover it. 00:34:15.085 [2024-04-26 23:37:04.128629] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-04-26 23:37:04.128685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-04-26 23:37:04.128696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-04-26 23:37:04.128701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-04-26 23:37:04.128705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.085 [2024-04-26 23:37:04.128715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.085 qpair failed and we were unable to recover it. 
00:34:15.085 [2024-04-26 23:37:04.138679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-04-26 23:37:04.138740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-04-26 23:37:04.138751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-04-26 23:37:04.138756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-04-26 23:37:04.138760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.085 [2024-04-26 23:37:04.138770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.085 qpair failed and we were unable to recover it. 00:34:15.085 [2024-04-26 23:37:04.148746] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.085 [2024-04-26 23:37:04.148804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.085 [2024-04-26 23:37:04.148815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.085 [2024-04-26 23:37:04.148820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.085 [2024-04-26 23:37:04.148825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.085 [2024-04-26 23:37:04.148835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.085 qpair failed and we were unable to recover it. 00:34:15.085 [2024-04-26 23:37:04.158764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.158819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.158832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.158840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.158846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.158860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 
00:34:15.086 [2024-04-26 23:37:04.168789] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.168841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.168852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.168857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.168862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.168872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 00:34:15.086 [2024-04-26 23:37:04.178766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.178853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.178864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.178869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.178874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.178884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 00:34:15.086 [2024-04-26 23:37:04.188876] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.188932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.188943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.188948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.188953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.188963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 
00:34:15.086 [2024-04-26 23:37:04.198873] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.198923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.198934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.198942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.198946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.198957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 00:34:15.086 [2024-04-26 23:37:04.208907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.208957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.208968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.208973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.208978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.208988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 00:34:15.086 [2024-04-26 23:37:04.218870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.218927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.218938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.218943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.218948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.218959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 
00:34:15.086 [2024-04-26 23:37:04.228928] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.228985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.228996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.229001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.229006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.229016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 00:34:15.086 [2024-04-26 23:37:04.239020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.239092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.239103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.239108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.239112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.239122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 00:34:15.086 [2024-04-26 23:37:04.248982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.249035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.249046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.249051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.249055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.249066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 
00:34:15.086 [2024-04-26 23:37:04.258942] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.258997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.259008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.259013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.259017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.259027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 00:34:15.086 [2024-04-26 23:37:04.269081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.269165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.269176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.269181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.269186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.269196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 00:34:15.086 [2024-04-26 23:37:04.279150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.279203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.279214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.279220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.086 [2024-04-26 23:37:04.279224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.086 [2024-04-26 23:37:04.279235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.086 qpair failed and we were unable to recover it. 
00:34:15.086 [2024-04-26 23:37:04.288992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.086 [2024-04-26 23:37:04.289055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.086 [2024-04-26 23:37:04.289069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.086 [2024-04-26 23:37:04.289074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.087 [2024-04-26 23:37:04.289078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.087 [2024-04-26 23:37:04.289089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.087 qpair failed and we were unable to recover it. 00:34:15.087 [2024-04-26 23:37:04.299044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.087 [2024-04-26 23:37:04.299099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.087 [2024-04-26 23:37:04.299111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.087 [2024-04-26 23:37:04.299117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.087 [2024-04-26 23:37:04.299123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.087 [2024-04-26 23:37:04.299133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.087 qpair failed and we were unable to recover it. 00:34:15.087 [2024-04-26 23:37:04.309201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.087 [2024-04-26 23:37:04.309265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.087 [2024-04-26 23:37:04.309276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.087 [2024-04-26 23:37:04.309281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.087 [2024-04-26 23:37:04.309285] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.087 [2024-04-26 23:37:04.309295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.087 qpair failed and we were unable to recover it. 
00:34:15.087 [2024-04-26 23:37:04.319087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.087 [2024-04-26 23:37:04.319144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.087 [2024-04-26 23:37:04.319155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.087 [2024-04-26 23:37:04.319160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.087 [2024-04-26 23:37:04.319165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.087 [2024-04-26 23:37:04.319174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.087 qpair failed and we were unable to recover it. 00:34:15.087 [2024-04-26 23:37:04.329164] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.087 [2024-04-26 23:37:04.329214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.087 [2024-04-26 23:37:04.329225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.087 [2024-04-26 23:37:04.329230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.087 [2024-04-26 23:37:04.329234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.087 [2024-04-26 23:37:04.329247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.087 qpair failed and we were unable to recover it. 00:34:15.349 [2024-04-26 23:37:04.339284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.349 [2024-04-26 23:37:04.339337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.349 [2024-04-26 23:37:04.339348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.349 [2024-04-26 23:37:04.339353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.349 [2024-04-26 23:37:04.339357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.349 [2024-04-26 23:37:04.339367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.349 qpair failed and we were unable to recover it. 
00:34:15.349 [2024-04-26 23:37:04.349302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.349 [2024-04-26 23:37:04.349362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.349 [2024-04-26 23:37:04.349374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.349 [2024-04-26 23:37:04.349379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.349 [2024-04-26 23:37:04.349383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.349 [2024-04-26 23:37:04.349394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.349 qpair failed and we were unable to recover it. 00:34:15.349 [2024-04-26 23:37:04.359408] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.349 [2024-04-26 23:37:04.359475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.349 [2024-04-26 23:37:04.359487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.349 [2024-04-26 23:37:04.359492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.349 [2024-04-26 23:37:04.359496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.349 [2024-04-26 23:37:04.359507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.349 qpair failed and we were unable to recover it. 00:34:15.349 [2024-04-26 23:37:04.369403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.349 [2024-04-26 23:37:04.369455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.349 [2024-04-26 23:37:04.369466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.349 [2024-04-26 23:37:04.369471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.349 [2024-04-26 23:37:04.369476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.349 [2024-04-26 23:37:04.369486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.349 qpair failed and we were unable to recover it. 
00:34:15.349 [2024-04-26 23:37:04.379311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.349 [2024-04-26 23:37:04.379366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.349 [2024-04-26 23:37:04.379382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.349 [2024-04-26 23:37:04.379388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.349 [2024-04-26 23:37:04.379392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.349 [2024-04-26 23:37:04.379402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.349 qpair failed and we were unable to recover it. 00:34:15.349 [2024-04-26 23:37:04.389434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.349 [2024-04-26 23:37:04.389488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.349 [2024-04-26 23:37:04.389499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.349 [2024-04-26 23:37:04.389504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.349 [2024-04-26 23:37:04.389509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.349 [2024-04-26 23:37:04.389518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.349 qpair failed and we were unable to recover it. 00:34:15.349 [2024-04-26 23:37:04.399308] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.349 [2024-04-26 23:37:04.399363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.349 [2024-04-26 23:37:04.399375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.349 [2024-04-26 23:37:04.399380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.349 [2024-04-26 23:37:04.399384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.349 [2024-04-26 23:37:04.399394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.349 qpair failed and we were unable to recover it. 
00:34:15.349 [2024-04-26 23:37:04.409334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.349 [2024-04-26 23:37:04.409388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.349 [2024-04-26 23:37:04.409398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.349 [2024-04-26 23:37:04.409403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.349 [2024-04-26 23:37:04.409408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.349 [2024-04-26 23:37:04.409418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.349 qpair failed and we were unable to recover it. 00:34:15.349 [2024-04-26 23:37:04.419474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.349 [2024-04-26 23:37:04.419528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.349 [2024-04-26 23:37:04.419539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.349 [2024-04-26 23:37:04.419544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.349 [2024-04-26 23:37:04.419551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.349 [2024-04-26 23:37:04.419562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.349 qpair failed and we were unable to recover it. 00:34:15.349 [2024-04-26 23:37:04.429531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.349 [2024-04-26 23:37:04.429588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.349 [2024-04-26 23:37:04.429599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.350 [2024-04-26 23:37:04.429604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.350 [2024-04-26 23:37:04.429608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.350 [2024-04-26 23:37:04.429618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.350 qpair failed and we were unable to recover it. 
00:34:15.350 [2024-04-26 23:37:04.439538] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.350 [2024-04-26 23:37:04.439589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.350 [2024-04-26 23:37:04.439599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.350 [2024-04-26 23:37:04.439604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.350 [2024-04-26 23:37:04.439609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.350 [2024-04-26 23:37:04.439619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.350 qpair failed and we were unable to recover it. 00:34:15.350 [2024-04-26 23:37:04.449583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.350 [2024-04-26 23:37:04.449631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.350 [2024-04-26 23:37:04.449642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.350 [2024-04-26 23:37:04.449647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.350 [2024-04-26 23:37:04.449652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.350 [2024-04-26 23:37:04.449662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.350 qpair failed and we were unable to recover it. 00:34:15.350 [2024-04-26 23:37:04.459672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.350 [2024-04-26 23:37:04.459726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.350 [2024-04-26 23:37:04.459737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.350 [2024-04-26 23:37:04.459743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.350 [2024-04-26 23:37:04.459747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.350 [2024-04-26 23:37:04.459757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.350 qpair failed and we were unable to recover it. 
00:34:15.350 [2024-04-26 23:37:04.469631] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.350 [2024-04-26 23:37:04.469692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.350 [2024-04-26 23:37:04.469703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.350 [2024-04-26 23:37:04.469709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.350 [2024-04-26 23:37:04.469713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.350 [2024-04-26 23:37:04.469723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.350 qpair failed and we were unable to recover it. 00:34:15.350 [2024-04-26 23:37:04.479659] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.350 [2024-04-26 23:37:04.479711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.350 [2024-04-26 23:37:04.479722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.350 [2024-04-26 23:37:04.479727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.350 [2024-04-26 23:37:04.479731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.350 [2024-04-26 23:37:04.479742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.350 qpair failed and we were unable to recover it. 00:34:15.350 [2024-04-26 23:37:04.489717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:15.350 [2024-04-26 23:37:04.489766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:15.350 [2024-04-26 23:37:04.489777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:15.350 [2024-04-26 23:37:04.489782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:15.350 [2024-04-26 23:37:04.489787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:15.350 [2024-04-26 23:37:04.489797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:15.350 qpair failed and we were unable to recover it. 
[... the seven-line CONNECT failure sequence above repeats, identical except for its timestamps, for every remaining connect attempt from 23:37:04.499 through 23:37:05.151 (elapsed-time prefix advancing from 00:34:15.350 to 00:34:16.143); every attempt targets tqpair=0x7f3ad0000b90, qpair id 2, and ends with "qpair failed and we were unable to recover it." ...]
00:34:16.143 [2024-04-26 23:37:05.161525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.161612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.161631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.143 [2024-04-26 23:37:05.161637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.143 [2024-04-26 23:37:05.161642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.143 [2024-04-26 23:37:05.161655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.143 qpair failed and we were unable to recover it. 00:34:16.143 [2024-04-26 23:37:05.171543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.171626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.171644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.143 [2024-04-26 23:37:05.171650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.143 [2024-04-26 23:37:05.171656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.143 [2024-04-26 23:37:05.171670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.143 qpair failed and we were unable to recover it. 00:34:16.143 [2024-04-26 23:37:05.181605] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.181691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.181709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.143 [2024-04-26 23:37:05.181715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.143 [2024-04-26 23:37:05.181719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.143 [2024-04-26 23:37:05.181732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.143 qpair failed and we were unable to recover it. 
00:34:16.143 [2024-04-26 23:37:05.191589] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.191653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.191665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.143 [2024-04-26 23:37:05.191670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.143 [2024-04-26 23:37:05.191675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.143 [2024-04-26 23:37:05.191685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.143 qpair failed and we were unable to recover it. 00:34:16.143 [2024-04-26 23:37:05.201668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.201714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.201725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.143 [2024-04-26 23:37:05.201730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.143 [2024-04-26 23:37:05.201734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.143 [2024-04-26 23:37:05.201745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.143 qpair failed and we were unable to recover it. 00:34:16.143 [2024-04-26 23:37:05.211644] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.211695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.211705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.143 [2024-04-26 23:37:05.211710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.143 [2024-04-26 23:37:05.211715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.143 [2024-04-26 23:37:05.211725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.143 qpair failed and we were unable to recover it. 
00:34:16.143 [2024-04-26 23:37:05.221714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.221811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.221823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.143 [2024-04-26 23:37:05.221828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.143 [2024-04-26 23:37:05.221832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.143 [2024-04-26 23:37:05.221849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.143 qpair failed and we were unable to recover it. 00:34:16.143 [2024-04-26 23:37:05.231681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.231732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.231743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.143 [2024-04-26 23:37:05.231747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.143 [2024-04-26 23:37:05.231752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.143 [2024-04-26 23:37:05.231762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.143 qpair failed and we were unable to recover it. 00:34:16.143 [2024-04-26 23:37:05.241694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.241738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.241749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.143 [2024-04-26 23:37:05.241754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.143 [2024-04-26 23:37:05.241758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.143 [2024-04-26 23:37:05.241769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.143 qpair failed and we were unable to recover it. 
00:34:16.143 [2024-04-26 23:37:05.251723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.251774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.251785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.143 [2024-04-26 23:37:05.251790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.143 [2024-04-26 23:37:05.251794] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.143 [2024-04-26 23:37:05.251805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.143 qpair failed and we were unable to recover it. 00:34:16.143 [2024-04-26 23:37:05.261834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.143 [2024-04-26 23:37:05.261893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.143 [2024-04-26 23:37:05.261904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.261909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.261914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.261924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 00:34:16.144 [2024-04-26 23:37:05.271809] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.271859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.271872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.271877] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.271882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.271892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 
00:34:16.144 [2024-04-26 23:37:05.281796] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.281850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.281861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.281866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.281871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.281881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 00:34:16.144 [2024-04-26 23:37:05.291744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.291793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.291804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.291809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.291813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.291823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 00:34:16.144 [2024-04-26 23:37:05.301959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.302016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.302027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.302032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.302036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.302047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 
00:34:16.144 [2024-04-26 23:37:05.311913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.311991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.312001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.312006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.312013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.312023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 00:34:16.144 [2024-04-26 23:37:05.321826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.321902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.321913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.321918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.321922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.321934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 00:34:16.144 [2024-04-26 23:37:05.331908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.331961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.331972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.331977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.331981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.331991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 
00:34:16.144 [2024-04-26 23:37:05.342081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.342162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.342173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.342178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.342182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.342193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 00:34:16.144 [2024-04-26 23:37:05.352054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.352109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.352119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.352124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.352129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.352139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 00:34:16.144 [2024-04-26 23:37:05.362045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.362141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.362153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.362158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.362163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.362173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 
00:34:16.144 [2024-04-26 23:37:05.372108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.372178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.372189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.372194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.372198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.372208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 00:34:16.144 [2024-04-26 23:37:05.382217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.382282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.382293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.382298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.382302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.382313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 00:34:16.144 [2024-04-26 23:37:05.392204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.144 [2024-04-26 23:37:05.392274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.144 [2024-04-26 23:37:05.392285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.144 [2024-04-26 23:37:05.392290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.144 [2024-04-26 23:37:05.392294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.144 [2024-04-26 23:37:05.392304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.144 qpair failed and we were unable to recover it. 
00:34:16.407 [2024-04-26 23:37:05.402216] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.407 [2024-04-26 23:37:05.402302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.407 [2024-04-26 23:37:05.402313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.407 [2024-04-26 23:37:05.402322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.407 [2024-04-26 23:37:05.402326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.407 [2024-04-26 23:37:05.402336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.407 qpair failed and we were unable to recover it. 00:34:16.407 [2024-04-26 23:37:05.412193] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.407 [2024-04-26 23:37:05.412242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.407 [2024-04-26 23:37:05.412252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.407 [2024-04-26 23:37:05.412257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.407 [2024-04-26 23:37:05.412262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.407 [2024-04-26 23:37:05.412272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.407 qpair failed and we were unable to recover it. 00:34:16.407 [2024-04-26 23:37:05.422302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.407 [2024-04-26 23:37:05.422363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.407 [2024-04-26 23:37:05.422374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.407 [2024-04-26 23:37:05.422379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.407 [2024-04-26 23:37:05.422383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.407 [2024-04-26 23:37:05.422393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.407 qpair failed and we were unable to recover it. 
00:34:16.407 [2024-04-26 23:37:05.432269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.407 [2024-04-26 23:37:05.432321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.407 [2024-04-26 23:37:05.432332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.407 [2024-04-26 23:37:05.432337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.407 [2024-04-26 23:37:05.432341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.407 [2024-04-26 23:37:05.432351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.407 qpair failed and we were unable to recover it. 00:34:16.407 [2024-04-26 23:37:05.442306] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.442357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.442368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.442373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.442377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.442388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 00:34:16.408 [2024-04-26 23:37:05.452313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.452361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.452372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.452377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.452381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.452391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 
00:34:16.408 [2024-04-26 23:37:05.462391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.462442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.462453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.462458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.462462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.462473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 00:34:16.408 [2024-04-26 23:37:05.472391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.472443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.472454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.472459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.472463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.472473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 00:34:16.408 [2024-04-26 23:37:05.482406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.482450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.482461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.482465] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.482470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.482480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 
00:34:16.408 [2024-04-26 23:37:05.492399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.492450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.492461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.492469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.492474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.492486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 00:34:16.408 [2024-04-26 23:37:05.502375] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.502430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.502442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.502447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.502451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.502462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 00:34:16.408 [2024-04-26 23:37:05.512487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.512539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.512550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.512555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.512559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.512569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 
00:34:16.408 [2024-04-26 23:37:05.522515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.522566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.522584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.522590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.522595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.522608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 00:34:16.408 [2024-04-26 23:37:05.532534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.532583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.532602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.532608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.532612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.532626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 00:34:16.408 [2024-04-26 23:37:05.542608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.542664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.542677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.542682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.542687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.542700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 
00:34:16.408 [2024-04-26 23:37:05.552476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.552566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.552580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.552586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.552591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.552603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 00:34:16.408 [2024-04-26 23:37:05.562491] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.562536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.562548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.408 [2024-04-26 23:37:05.562553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.408 [2024-04-26 23:37:05.562557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.408 [2024-04-26 23:37:05.562568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.408 qpair failed and we were unable to recover it. 00:34:16.408 [2024-04-26 23:37:05.572667] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.408 [2024-04-26 23:37:05.572793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.408 [2024-04-26 23:37:05.572812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.409 [2024-04-26 23:37:05.572818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.409 [2024-04-26 23:37:05.572823] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.409 [2024-04-26 23:37:05.572840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.409 qpair failed and we were unable to recover it. 
00:34:16.409 [2024-04-26 23:37:05.582739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.409 [2024-04-26 23:37:05.582809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.409 [2024-04-26 23:37:05.582825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.409 [2024-04-26 23:37:05.582830] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.409 [2024-04-26 23:37:05.582834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.409 [2024-04-26 23:37:05.582851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.409 qpair failed and we were unable to recover it. 00:34:16.409 [2024-04-26 23:37:05.592747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.409 [2024-04-26 23:37:05.592799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.409 [2024-04-26 23:37:05.592809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.409 [2024-04-26 23:37:05.592814] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.409 [2024-04-26 23:37:05.592819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.409 [2024-04-26 23:37:05.592829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.409 qpair failed and we were unable to recover it. 00:34:16.409 [2024-04-26 23:37:05.602730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.409 [2024-04-26 23:37:05.602776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.409 [2024-04-26 23:37:05.602788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.409 [2024-04-26 23:37:05.602792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.409 [2024-04-26 23:37:05.602797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.409 [2024-04-26 23:37:05.602807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.409 qpair failed and we were unable to recover it. 
00:34:16.409 [2024-04-26 23:37:05.612758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.409 [2024-04-26 23:37:05.612805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.409 [2024-04-26 23:37:05.612816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.409 [2024-04-26 23:37:05.612821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.409 [2024-04-26 23:37:05.612825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.409 [2024-04-26 23:37:05.612835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.409 qpair failed and we were unable to recover it. 00:34:16.409 [2024-04-26 23:37:05.622816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.409 [2024-04-26 23:37:05.622875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.409 [2024-04-26 23:37:05.622886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.409 [2024-04-26 23:37:05.622891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.409 [2024-04-26 23:37:05.622895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.409 [2024-04-26 23:37:05.622908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.409 qpair failed and we were unable to recover it. 00:34:16.409 [2024-04-26 23:37:05.632671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.409 [2024-04-26 23:37:05.632721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.409 [2024-04-26 23:37:05.632732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.409 [2024-04-26 23:37:05.632737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.409 [2024-04-26 23:37:05.632741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.409 [2024-04-26 23:37:05.632752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.409 qpair failed and we were unable to recover it. 
00:34:16.409 [2024-04-26 23:37:05.642825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.409 [2024-04-26 23:37:05.642872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.409 [2024-04-26 23:37:05.642883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.409 [2024-04-26 23:37:05.642888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.409 [2024-04-26 23:37:05.642893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.409 [2024-04-26 23:37:05.642903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.409 qpair failed and we were unable to recover it. 00:34:16.409 [2024-04-26 23:37:05.652852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.409 [2024-04-26 23:37:05.652943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.409 [2024-04-26 23:37:05.652955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.409 [2024-04-26 23:37:05.652960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.409 [2024-04-26 23:37:05.652964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.409 [2024-04-26 23:37:05.652974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.409 qpair failed and we were unable to recover it. 00:34:16.672 [2024-04-26 23:37:05.662933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:16.672 [2024-04-26 23:37:05.662987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:16.673 [2024-04-26 23:37:05.662998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:16.673 [2024-04-26 23:37:05.663003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:16.673 [2024-04-26 23:37:05.663007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:16.673 [2024-04-26 23:37:05.663018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:16.673 qpair failed and we were unable to recover it. 
00:34:16.673 [2024-04-26 23:37:05.672919] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.673006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.673020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.673025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.673030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.673040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.682950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.682995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.683006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.683011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.683015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.683025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.692842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.692895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.692906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.692911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.692916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.692926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.703062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.703116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.703128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.703133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.703138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.703148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.713025] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.713074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.713085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.713090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.713098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.713108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.723042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.723097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.723107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.723113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.723117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.723127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.733059] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.733113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.733124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.733129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.733133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.733143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.743063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.743161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.743173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.743178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.743182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.743192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.753134] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.753186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.753197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.753202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.753207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.753217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.763152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.763203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.763214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.763219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.763224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.763234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.773177] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.773229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.773240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.773245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.773249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.773259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.783259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.783321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.783332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.783337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.783341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.673 [2024-04-26 23:37:05.783351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.673 qpair failed and we were unable to recover it.
00:34:16.673 [2024-04-26 23:37:05.793144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.673 [2024-04-26 23:37:05.793199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.673 [2024-04-26 23:37:05.793212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.673 [2024-04-26 23:37:05.793216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.673 [2024-04-26 23:37:05.793221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.793232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.803176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.803238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.803249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.803255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.803262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.803273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.813294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.813340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.813351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.813356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.813361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.813371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.823367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.823422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.823433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.823438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.823442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.823453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.833324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.833385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.833396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.833401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.833406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.833416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.843381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.843432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.843444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.843449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.843453] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.843463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.853270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.853319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.853330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.853335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.853339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.853350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.863484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.863546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.863557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.863562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.863566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.863577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.873494] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.873544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.873555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.873561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.873565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.873575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.883479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.883525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.883536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.883541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.883546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.883556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.893506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.893566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.893577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.893588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.893593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.893603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.903467] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.903548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.903559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.903564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.903568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.903579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.913575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.913624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.913635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.913640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.913644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.913654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.674 [2024-04-26 23:37:05.923587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.674 [2024-04-26 23:37:05.923641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.674 [2024-04-26 23:37:05.923652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.674 [2024-04-26 23:37:05.923658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.674 [2024-04-26 23:37:05.923662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.674 [2024-04-26 23:37:05.923672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.674 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:05.933661] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.938 [2024-04-26 23:37:05.933736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.938 [2024-04-26 23:37:05.933747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.938 [2024-04-26 23:37:05.933753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.938 [2024-04-26 23:37:05.933757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.938 [2024-04-26 23:37:05.933767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.938 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:05.943669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.938 [2024-04-26 23:37:05.943731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.938 [2024-04-26 23:37:05.943742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.938 [2024-04-26 23:37:05.943747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.938 [2024-04-26 23:37:05.943751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.938 [2024-04-26 23:37:05.943761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.938 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:05.953633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.938 [2024-04-26 23:37:05.953683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.938 [2024-04-26 23:37:05.953695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.938 [2024-04-26 23:37:05.953700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.938 [2024-04-26 23:37:05.953705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.938 [2024-04-26 23:37:05.953715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.938 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:05.963677] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.938 [2024-04-26 23:37:05.963726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.938 [2024-04-26 23:37:05.963737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.938 [2024-04-26 23:37:05.963742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.938 [2024-04-26 23:37:05.963746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.938 [2024-04-26 23:37:05.963756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.938 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:05.973739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.938 [2024-04-26 23:37:05.973783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.938 [2024-04-26 23:37:05.973795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.938 [2024-04-26 23:37:05.973800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.938 [2024-04-26 23:37:05.973805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.938 [2024-04-26 23:37:05.973815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.938 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:05.983794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.938 [2024-04-26 23:37:05.983856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.938 [2024-04-26 23:37:05.983871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.938 [2024-04-26 23:37:05.983876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.938 [2024-04-26 23:37:05.983881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.938 [2024-04-26 23:37:05.983896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.938 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:05.993776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.938 [2024-04-26 23:37:05.993828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.938 [2024-04-26 23:37:05.993844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.938 [2024-04-26 23:37:05.993849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.938 [2024-04-26 23:37:05.993854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.938 [2024-04-26 23:37:05.993865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.938 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:06.003780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.938 [2024-04-26 23:37:06.003828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.938 [2024-04-26 23:37:06.003843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.938 [2024-04-26 23:37:06.003848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.938 [2024-04-26 23:37:06.003853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.938 [2024-04-26 23:37:06.003864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.938 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:06.013865] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.938 [2024-04-26 23:37:06.013948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.938 [2024-04-26 23:37:06.013959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.938 [2024-04-26 23:37:06.013964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.938 [2024-04-26 23:37:06.013968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.938 [2024-04-26 23:37:06.013979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.938 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:06.023878] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.938 [2024-04-26 23:37:06.023933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.938 [2024-04-26 23:37:06.023945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.938 [2024-04-26 23:37:06.023950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.938 [2024-04-26 23:37:06.023955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.938 [2024-04-26 23:37:06.023968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.938 qpair failed and we were unable to recover it.
00:34:16.938 [2024-04-26 23:37:06.033908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.033962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.033973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.033978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.033982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.033993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.043907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.043961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.043971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.043977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.043981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.043991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.053811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.053866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.053878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.053883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.053887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.053898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.064016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.064070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.064081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.064086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.064091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.064101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.073945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.073997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.074011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.074016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.074021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.074032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.083979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.084025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.084036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.084041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.084046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.084056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.094042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.094099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.094110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.094115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.094119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.094129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.103993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.104046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.104057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.104062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.104067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.104077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.114020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.114074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.114085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.114091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.114098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.114108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.124105] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.124191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.124201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.124206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.124212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.124222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.134168] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.134216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.134227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.134232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.134236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.134246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.144244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.144338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.144349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.144354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.144359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.144369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.154203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.154255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.154266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.154271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.939 [2024-04-26 23:37:06.154275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.939 [2024-04-26 23:37:06.154285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.939 qpair failed and we were unable to recover it.
00:34:16.939 [2024-04-26 23:37:06.164235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.939 [2024-04-26 23:37:06.164296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.939 [2024-04-26 23:37:06.164307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.939 [2024-04-26 23:37:06.164312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.940 [2024-04-26 23:37:06.164316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.940 [2024-04-26 23:37:06.164326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.940 qpair failed and we were unable to recover it.
00:34:16.940 [2024-04-26 23:37:06.174272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.940 [2024-04-26 23:37:06.174325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.940 [2024-04-26 23:37:06.174336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.940 [2024-04-26 23:37:06.174341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.940 [2024-04-26 23:37:06.174346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.940 [2024-04-26 23:37:06.174356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.940 qpair failed and we were unable to recover it.
00:34:16.940 [2024-04-26 23:37:06.184357] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:16.940 [2024-04-26 23:37:06.184409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:16.940 [2024-04-26 23:37:06.184420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:16.940 [2024-04-26 23:37:06.184425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:16.940 [2024-04-26 23:37:06.184430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:16.940 [2024-04-26 23:37:06.184440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:16.940 qpair failed and we were unable to recover it.
00:34:17.202 [2024-04-26 23:37:06.194341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.202 [2024-04-26 23:37:06.194407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.202 [2024-04-26 23:37:06.194419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.202 [2024-04-26 23:37:06.194424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.202 [2024-04-26 23:37:06.194429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:17.202 [2024-04-26 23:37:06.194439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:17.202 qpair failed and we were unable to recover it.
00:34:17.202 [2024-04-26 23:37:06.204368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.202 [2024-04-26 23:37:06.204417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.202 [2024-04-26 23:37:06.204428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.202 [2024-04-26 23:37:06.204433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.202 [2024-04-26 23:37:06.204441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:17.202 [2024-04-26 23:37:06.204451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:17.202 qpair failed and we were unable to recover it.
00:34:17.202 [2024-04-26 23:37:06.214377] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.202 [2024-04-26 23:37:06.214430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.202 [2024-04-26 23:37:06.214442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.202 [2024-04-26 23:37:06.214447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.202 [2024-04-26 23:37:06.214451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:17.202 [2024-04-26 23:37:06.214461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:17.202 qpair failed and we were unable to recover it.
00:34:17.202 [2024-04-26 23:37:06.224352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.202 [2024-04-26 23:37:06.224407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.202 [2024-04-26 23:37:06.224418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.203 [2024-04-26 23:37:06.224423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.203 [2024-04-26 23:37:06.224428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:17.203 [2024-04-26 23:37:06.224438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:17.203 qpair failed and we were unable to recover it.
00:34:17.203 [2024-04-26 23:37:06.234431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:17.203 [2024-04-26 23:37:06.234486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:17.203 [2024-04-26 23:37:06.234498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:17.203 [2024-04-26 23:37:06.234503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:17.203 [2024-04-26 23:37:06.234508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90
00:34:17.203 [2024-04-26 23:37:06.234518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:17.203 qpair failed and we were unable to recover it.
00:34:17.203 [2024-04-26 23:37:06.244471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.244516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.244528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.244533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.244537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.244547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.254465] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.254510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.254521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.254526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.254531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.254541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.264492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.264549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.264560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.264565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.264569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.264580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 
00:34:17.203 [2024-04-26 23:37:06.274522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.274573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.274583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.274588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.274593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.274603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.284573] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.284624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.284635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.284640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.284645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.284655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.294593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.294640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.294652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.294660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.294664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.294674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 
00:34:17.203 [2024-04-26 23:37:06.304543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.304598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.304609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.304614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.304618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.304628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.314655] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.314706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.314717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.314722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.314726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.314736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.324657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.324703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.324714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.324719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.324723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.324733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 
00:34:17.203 [2024-04-26 23:37:06.334633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.334678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.334689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.334694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.334698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.334708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.344789] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.344844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.344856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.344861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.344865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.344875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.354781] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.354834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.354849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.354854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.354858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.354868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 
00:34:17.203 [2024-04-26 23:37:06.364800] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.364847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.364858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.364863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.364868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.364878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.374840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.374888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.374899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.374904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.374909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.374919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.384895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.384946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.384959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.384964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.384969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.384979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 
00:34:17.203 [2024-04-26 23:37:06.394889] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.394941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.394953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.394958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.394962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.394972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.404895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.404940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.404951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.404956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.404961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.404971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.414954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.415003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.415015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.415020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.415024] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.415035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 
00:34:17.203 [2024-04-26 23:37:06.425007] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.425061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.425072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.425077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.425081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.425094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.434876] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.434965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.434976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.434981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.434986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.434996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.203 [2024-04-26 23:37:06.445027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.445074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.445085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.445090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.445094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.445104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 
00:34:17.203 [2024-04-26 23:37:06.455020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.203 [2024-04-26 23:37:06.455067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.203 [2024-04-26 23:37:06.455078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.203 [2024-04-26 23:37:06.455083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.203 [2024-04-26 23:37:06.455088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.203 [2024-04-26 23:37:06.455097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.203 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.465108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.465160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.465172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.465178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.465183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.465193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.475086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.475143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.475157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.475162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.475166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.475176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 
00:34:17.467 [2024-04-26 23:37:06.485005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.485054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.485066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.485071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.485076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.485087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.495011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.495059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.495071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.495076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.495081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.495091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.505218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.505318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.505329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.505334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.505339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.505349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 
00:34:17.467 [2024-04-26 23:37:06.515099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.515151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.515163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.515168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.515172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.515185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.525226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.525272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.525284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.525289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.525293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.525303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.535218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.535263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.535274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.535279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.535284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.535293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 
00:34:17.467 [2024-04-26 23:37:06.545317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.545371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.545382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.545387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.545392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.545402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.555167] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.555221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.555232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.555237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.555242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.555252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.565326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.565376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.565388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.565393] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.565398] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.565408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 
00:34:17.467 [2024-04-26 23:37:06.575238] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.575287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.575299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.575304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.575309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.575319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.585391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.585447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.585459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.585464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.585469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.585479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.595428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.595483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.595494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.595499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.595504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.595514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 
00:34:17.467 [2024-04-26 23:37:06.605488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.467 [2024-04-26 23:37:06.605559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.467 [2024-04-26 23:37:06.605571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.467 [2024-04-26 23:37:06.605576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.467 [2024-04-26 23:37:06.605583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.467 [2024-04-26 23:37:06.605593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.467 qpair failed and we were unable to recover it. 00:34:17.467 [2024-04-26 23:37:06.615333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.615380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.615391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.615396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.615401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.615411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 00:34:17.468 [2024-04-26 23:37:06.625532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.625586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.625597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.625602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.625607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.625617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 
00:34:17.468 [2024-04-26 23:37:06.635532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.635584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.635595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.635600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.635604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.635614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 00:34:17.468 [2024-04-26 23:37:06.645558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.645607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.645619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.645624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.645628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.645638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 00:34:17.468 [2024-04-26 23:37:06.655597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.655649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.655660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.655666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.655670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.655680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 
00:34:17.468 [2024-04-26 23:37:06.665653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.665735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.665746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.665751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.665756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.665767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 00:34:17.468 [2024-04-26 23:37:06.675631] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.675685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.675696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.675702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.675706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.675716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 00:34:17.468 [2024-04-26 23:37:06.685514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.685563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.685574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.685579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.685584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.685594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 
00:34:17.468 [2024-04-26 23:37:06.695684] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.695728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.695739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.695748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.695752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.695763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 00:34:17.468 [2024-04-26 23:37:06.705623] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.705692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.705703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.705708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.705713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.705723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 00:34:17.468 [2024-04-26 23:37:06.715732] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.468 [2024-04-26 23:37:06.715788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.468 [2024-04-26 23:37:06.715799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.468 [2024-04-26 23:37:06.715805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.468 [2024-04-26 23:37:06.715809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.468 [2024-04-26 23:37:06.715820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.468 qpair failed and we were unable to recover it. 
00:34:17.730 [2024-04-26 23:37:06.725790] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.730 [2024-04-26 23:37:06.725872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.730 [2024-04-26 23:37:06.725883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.730 [2024-04-26 23:37:06.725889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.730 [2024-04-26 23:37:06.725894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.730 [2024-04-26 23:37:06.725904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.730 qpair failed and we were unable to recover it. 00:34:17.730 [2024-04-26 23:37:06.735751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.730 [2024-04-26 23:37:06.735802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.730 [2024-04-26 23:37:06.735813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.730 [2024-04-26 23:37:06.735818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.730 [2024-04-26 23:37:06.735823] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.730 [2024-04-26 23:37:06.735833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.730 qpair failed and we were unable to recover it. 00:34:17.730 [2024-04-26 23:37:06.745864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.730 [2024-04-26 23:37:06.745921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.730 [2024-04-26 23:37:06.745931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.730 [2024-04-26 23:37:06.745936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.730 [2024-04-26 23:37:06.745941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.730 [2024-04-26 23:37:06.745951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.730 qpair failed and we were unable to recover it. 
00:34:17.730 [2024-04-26 23:37:06.755839] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.730 [2024-04-26 23:37:06.755892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.730 [2024-04-26 23:37:06.755902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.730 [2024-04-26 23:37:06.755908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.755912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.755922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 00:34:17.731 [2024-04-26 23:37:06.765868] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.765918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.765929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.765935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.765939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.765949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 00:34:17.731 [2024-04-26 23:37:06.775882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.775930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.775941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.775946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.775950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.775960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 
00:34:17.731 [2024-04-26 23:37:06.785843] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.785900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.785914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.785919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.785924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.785934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 00:34:17.731 [2024-04-26 23:37:06.795986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.796043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.796054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.796059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.796064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.796074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 00:34:17.731 [2024-04-26 23:37:06.805977] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.806021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.806032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.806037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.806041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.806051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 
00:34:17.731 [2024-04-26 23:37:06.815986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.816034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.816044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.816049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.816054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.816064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 00:34:17.731 [2024-04-26 23:37:06.826105] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.826165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.826177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.826182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.826186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.826197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 00:34:17.731 [2024-04-26 23:37:06.836077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.836126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.836137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.836142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.836147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.836157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 
00:34:17.731 [2024-04-26 23:37:06.846027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.846076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.846087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.846091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.846096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.846106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 00:34:17.731 [2024-04-26 23:37:06.855979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.856025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.856036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.856041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.856045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.856055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 00:34:17.731 [2024-04-26 23:37:06.866236] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.866288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.866299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.866304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.866308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.866319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 
00:34:17.731 [2024-04-26 23:37:06.876180] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.876234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.876248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.876253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.876257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.876267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 00:34:17.731 [2024-04-26 23:37:06.886192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.731 [2024-04-26 23:37:06.886244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.731 [2024-04-26 23:37:06.886255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.731 [2024-04-26 23:37:06.886260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.731 [2024-04-26 23:37:06.886265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.731 [2024-04-26 23:37:06.886275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.731 qpair failed and we were unable to recover it. 00:34:17.732 [2024-04-26 23:37:06.896189] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.732 [2024-04-26 23:37:06.896255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.732 [2024-04-26 23:37:06.896267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.732 [2024-04-26 23:37:06.896272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.732 [2024-04-26 23:37:06.896277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.732 [2024-04-26 23:37:06.896287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.732 qpair failed and we were unable to recover it. 
00:34:17.732 [2024-04-26 23:37:06.906289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.732 [2024-04-26 23:37:06.906342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.732 [2024-04-26 23:37:06.906353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.732 [2024-04-26 23:37:06.906358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.732 [2024-04-26 23:37:06.906362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.732 [2024-04-26 23:37:06.906372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-04-26 23:37:06.916346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.732 [2024-04-26 23:37:06.916401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.732 [2024-04-26 23:37:06.916412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.732 [2024-04-26 23:37:06.916417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.732 [2024-04-26 23:37:06.916422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.732 [2024-04-26 23:37:06.916435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-04-26 23:37:06.926333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.732 [2024-04-26 23:37:06.926419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.732 [2024-04-26 23:37:06.926430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.732 [2024-04-26 23:37:06.926435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.732 [2024-04-26 23:37:06.926439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.732 [2024-04-26 23:37:06.926449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.732 qpair failed and we were unable to recover it. 
00:34:17.732 [2024-04-26 23:37:06.936379] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.732 [2024-04-26 23:37:06.936425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.732 [2024-04-26 23:37:06.936436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.732 [2024-04-26 23:37:06.936441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.732 [2024-04-26 23:37:06.936446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.732 [2024-04-26 23:37:06.936456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-04-26 23:37:06.946419] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.732 [2024-04-26 23:37:06.946471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.732 [2024-04-26 23:37:06.946482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.732 [2024-04-26 23:37:06.946487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.732 [2024-04-26 23:37:06.946491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.732 [2024-04-26 23:37:06.946501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-04-26 23:37:06.956386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.732 [2024-04-26 23:37:06.956437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.732 [2024-04-26 23:37:06.956450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.732 [2024-04-26 23:37:06.956455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.732 [2024-04-26 23:37:06.956459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.732 [2024-04-26 23:37:06.956470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.732 qpair failed and we were unable to recover it. 
00:34:17.732 [2024-04-26 23:37:06.966409] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.732 [2024-04-26 23:37:06.966463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.732 [2024-04-26 23:37:06.966477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.732 [2024-04-26 23:37:06.966482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.732 [2024-04-26 23:37:06.966487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.732 [2024-04-26 23:37:06.966497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.732 [2024-04-26 23:37:06.976350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.732 [2024-04-26 23:37:06.976411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.732 [2024-04-26 23:37:06.976422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.732 [2024-04-26 23:37:06.976427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.732 [2024-04-26 23:37:06.976431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.732 [2024-04-26 23:37:06.976441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.732 qpair failed and we were unable to recover it. 00:34:17.994 [2024-04-26 23:37:06.986541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:06.986593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:06.986604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:06.986609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:06.986614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:06.986624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 
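Note the cadence: the target logs one rejected CONNECT roughly every 10 ms (23:37:06.7558, .7658, .7758, ...), and the same tqpair address 0x7f3ad0000b90 recurs because each retry's qpair lands at the same heap address. When a console log flattens hundreds of these records, a throwaway summarizer is easier to scan than the raw flood; a sketch of such a script (hypothetical, not part of the test harness; reads this log on stdin):

    #!/usr/bin/env python3
    # triage.py - hypothetical one-off that counts the repeated SPDK error
    # records in this log and reports the retry cadence from their timestamps.
    import re
    import sys
    from datetime import datetime

    # Matches "[2024-04-26 23:37:06.755839] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair:"
    pat = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\] (\S+?):\s*(\d+):(\w+):")

    times = []
    counts = {}
    for line in sys.stdin:
        for ts, src, lineno, func in pat.findall(line):
            counts[(src, func)] = counts.get((src, func), 0) + 1
            if func == "_nvmf_ctrlr_add_io_qpair":
                times.append(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f"))

    for (src, func), n in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"{n:5d}  {src}:{func}")
    if len(times) > 1:
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        print(f"retry cadence ~{sum(gaps) / len(gaps) * 1000:.1f} ms over {len(times)} attempts")

Fed this section, it should report equal counts for each of the six error sources plus the ~10 ms spacing between CONNECT attempts.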
00:34:17.995 [2024-04-26 23:37:06.996489] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:06.996544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:06.996555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:06.996560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:06.996565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:06.996575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 00:34:17.995 [2024-04-26 23:37:07.006512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.006559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.006570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.006575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.006583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.006593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 00:34:17.995 [2024-04-26 23:37:07.016546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.016594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.016605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.016610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.016614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.016625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 
00:34:17.995 [2024-04-26 23:37:07.026583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.026635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.026645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.026650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.026655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.026665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 00:34:17.995 [2024-04-26 23:37:07.036602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.036655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.036666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.036671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.036675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.036685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 00:34:17.995 [2024-04-26 23:37:07.046622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.046664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.046674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.046679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.046684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.046694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 
00:34:17.995 [2024-04-26 23:37:07.056654] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.056704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.056715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.056720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.056725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.056735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 00:34:17.995 [2024-04-26 23:37:07.066730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.066783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.066794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.066800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.066804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.066814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 00:34:17.995 [2024-04-26 23:37:07.076713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.076768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.076779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.076784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.076788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.076799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 
00:34:17.995 [2024-04-26 23:37:07.086730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.086780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.086790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.086796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.086800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.086810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 00:34:17.995 [2024-04-26 23:37:07.096633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.096682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.096694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.096701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.096706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.096717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 00:34:17.995 [2024-04-26 23:37:07.106704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.995 [2024-04-26 23:37:07.106761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.995 [2024-04-26 23:37:07.106772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.995 [2024-04-26 23:37:07.106777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.995 [2024-04-26 23:37:07.106782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.995 [2024-04-26 23:37:07.106792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.995 qpair failed and we were unable to recover it. 
00:34:17.995 [2024-04-26 23:37:07.116807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.116904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.116915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.116921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.116925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.116935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 00:34:17.996 [2024-04-26 23:37:07.126825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.126874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.126892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.126897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.126901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.126911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 00:34:17.996 [2024-04-26 23:37:07.136883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.136929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.136940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.136946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.136950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.136961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 
00:34:17.996 [2024-04-26 23:37:07.146972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.147028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.147040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.147045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.147049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.147059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 00:34:17.996 [2024-04-26 23:37:07.156933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.156985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.156996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.157001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.157005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.157015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 00:34:17.996 [2024-04-26 23:37:07.166942] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.166997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.167008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.167013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.167017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.167027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 
00:34:17.996 [2024-04-26 23:37:07.176970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.177016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.177027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.177032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.177036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.177046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 00:34:17.996 [2024-04-26 23:37:07.187060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.187111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.187123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.187131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.187135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.187146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 00:34:17.996 [2024-04-26 23:37:07.197080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.197165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.197176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.197181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.197186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.197196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 
00:34:17.996 [2024-04-26 23:37:07.207081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.207133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.207144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.207149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.207153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.207163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 00:34:17.996 [2024-04-26 23:37:07.217104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.217155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.217166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.217170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.217175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.217185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 00:34:17.996 [2024-04-26 23:37:07.227047] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.227102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.227113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.227118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.227123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.227133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 
00:34:17.996 [2024-04-26 23:37:07.237042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.237096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.237107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.996 [2024-04-26 23:37:07.237112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.996 [2024-04-26 23:37:07.237116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.996 [2024-04-26 23:37:07.237126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.996 qpair failed and we were unable to recover it. 00:34:17.996 [2024-04-26 23:37:07.247166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:17.996 [2024-04-26 23:37:07.247214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:17.996 [2024-04-26 23:37:07.247224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:17.997 [2024-04-26 23:37:07.247229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:17.997 [2024-04-26 23:37:07.247234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:17.997 [2024-04-26 23:37:07.247244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:17.997 qpair failed and we were unable to recover it. 00:34:18.259 [2024-04-26 23:37:07.257205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.259 [2024-04-26 23:37:07.257253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.259 [2024-04-26 23:37:07.257263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.259 [2024-04-26 23:37:07.257269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.259 [2024-04-26 23:37:07.257273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:18.259 [2024-04-26 23:37:07.257283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.259 qpair failed and we were unable to recover it. 
00:34:18.259 [2024-04-26 23:37:07.267145] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.259 [2024-04-26 23:37:07.267204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.259 [2024-04-26 23:37:07.267215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.259 [2024-04-26 23:37:07.267220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.259 [2024-04-26 23:37:07.267224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:18.259 [2024-04-26 23:37:07.267234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.259 qpair failed and we were unable to recover it. 00:34:18.259 [2024-04-26 23:37:07.277279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.259 [2024-04-26 23:37:07.277335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.259 [2024-04-26 23:37:07.277349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.259 [2024-04-26 23:37:07.277354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.259 [2024-04-26 23:37:07.277358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:18.259 [2024-04-26 23:37:07.277368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.259 qpair failed and we were unable to recover it. 00:34:18.259 [2024-04-26 23:37:07.287309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.259 [2024-04-26 23:37:07.287355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.259 [2024-04-26 23:37:07.287366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.259 [2024-04-26 23:37:07.287371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.259 [2024-04-26 23:37:07.287376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:18.259 [2024-04-26 23:37:07.287385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.259 qpair failed and we were unable to recover it. 
00:34:18.259 [2024-04-26 23:37:07.297299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.259 [2024-04-26 23:37:07.297346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.259 [2024-04-26 23:37:07.297357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.259 [2024-04-26 23:37:07.297362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.259 [2024-04-26 23:37:07.297366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:18.259 [2024-04-26 23:37:07.297376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.259 qpair failed and we were unable to recover it. 00:34:18.259 [2024-04-26 23:37:07.307391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.259 [2024-04-26 23:37:07.307488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.259 [2024-04-26 23:37:07.307500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.259 [2024-04-26 23:37:07.307505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.259 [2024-04-26 23:37:07.307509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:18.259 [2024-04-26 23:37:07.307521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.259 qpair failed and we were unable to recover it. 00:34:18.259 [2024-04-26 23:37:07.317236] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.259 [2024-04-26 23:37:07.317285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.259 [2024-04-26 23:37:07.317295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.259 [2024-04-26 23:37:07.317300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.259 [2024-04-26 23:37:07.317305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:18.259 [2024-04-26 23:37:07.317318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.259 qpair failed and we were unable to recover it. 
00:34:18.259 [2024-04-26 23:37:07.327280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.259 [2024-04-26 23:37:07.327326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.259 [2024-04-26 23:37:07.327337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.259 [2024-04-26 23:37:07.327342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.259 [2024-04-26 23:37:07.327347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad0000b90 00:34:18.259 [2024-04-26 23:37:07.327357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.259 qpair failed and we were unable to recover it.
[... 16 further CONNECT failures follow between 23:37:07.337 and 23:37:07.487, identical apart from timestamps (tqpair=0x7f3ad0000b90, qpair id 2), each ending "qpair failed and we were unable to recover it." ...]
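The run above is the host side of the planned disconnect: the target has already dropped controller ID 0x1, so every attempt to re-attach an I/O queue fails the Fabrics CONNECT with sct 1, sc 130 (0x82, which maps to the Fabrics "Connect Invalid Parameters" status), and SPDK gives up on the qpair. To poke the same listener by hand from an initiator, the stock nvme-cli can be used; a sketch only, with the address, port and subsystem NQN taken from the records above:

    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1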
00:34:18.261 [2024-04-26 23:37:07.488255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813e60 is same with the state(5) to be set
[... 32 outstanding I/Os on qpair 1 then completed with error (sct=0, sc=8), each logged as "Read/Write completed with error ... starting I/O failed" ...]
00:34:18.261 [2024-04-26 23:37:07.488728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.261 [2024-04-26 23:37:07.497859] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.261 [2024-04-26 23:37:07.497915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.261 [2024-04-26 23:37:07.497935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.261 [2024-04-26 23:37:07.497943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.261 [2024-04-26 23:37:07.497949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3ad8000b90 00:34:18.261 [2024-04-26 23:37:07.497966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.261 qpair failed and we were unable to recover it.
[... one further identical CONNECT failure at 23:37:07.507 (tqpair=0x7f3ad8000b90, qpair id 1), ending "qpair failed and we were unable to recover it." ...]
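Throughout these retries the TCP listener itself keeps accepting connections; the rejection happens one layer up, in the NVMe-oF CONNECT handling. When reproducing this by hand it can be worth confirming that the socket is still open on the target host; a sketch using iproute2, with the port taken from the log:

    ss -ltn 'sport = :4420'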
[... 32 outstanding I/Os on qpair 4 completed with error (sct=0, sc=8), each logged as "Read/Write completed with error ... starting I/O failed" ...]
00:34:18.262 [2024-04-26 23:37:07.508907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... two further CONNECT failures at 23:37:07.517 and 23:37:07.527 (tqpair=0x7f3ac8000b90, qpair id 4), each ending "qpair failed and we were unable to recover it." ...]
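Each time a qpair is torn down, its outstanding I/Os are failed back first (here 32 per queue, each with sct=0, sc=8, which corresponds to the generic "command aborted due to SQ deletion" status) and only then is the CQ transport error raised. When triaging a saved copy of a log like this one, the two signatures can simply be counted; a sketch, with the log filename hypothetical:

    grep -c 'qpair failed and we were unable to recover it' build.log    # abandoned qpairs
    grep -c 'starting I/O failed' build.log                              # aborted in-flight I/Os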
[... 32 outstanding I/Os on qpair 3 completed with error (sct=0, sc=8), each logged as "Read/Write completed with error ... starting I/O failed" ...]
00:34:18.524 [2024-04-26 23:37:07.528500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.524 [2024-04-26 23:37:07.537940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:18.524 [2024-04-26 23:37:07.538004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:18.524 [2024-04-26 23:37:07.538029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:18.524 [2024-04-26 23:37:07.538037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:18.524 [2024-04-26 23:37:07.538044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1806330 00:34:18.524 [2024-04-26 23:37:07.538062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.524 qpair failed and we were unable to recover it.
[... one further identical CONNECT failure at 23:37:07.547 (tqpair=0x1806330, qpair id 3), ending "qpair failed and we were unable to recover it." ...]
00:34:18.524 [2024-04-26 23:37:07.548381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1813e60 (9): Bad file descriptor 00:34:18.524 Initializing NVMe Controllers 00:34:18.524 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:18.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:18.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:18.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:18.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:18.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:18.524 Initialization complete. Launching workers.
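After the worker threads report in, the test is done and the records below show the harness tearing everything down: it syncs, unloads the kernel NVMe modules (retried, since they can stay busy briefly after the test), then kills the nvmf target process and waits for it. A minimal sketch of that pattern, reconstructed from the trace that follows (the retry-and-break detail is assumed, and wait works here only because the harness started nvmf_tgt itself):

    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # module may still be busy right after the test
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    nvmf_pid=4189640                        # pid of the nvmf_tgt process, from the log below
    kill "$nvmf_pid" && wait "$nvmf_pid"    # the harness's killprocess/wait step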
00:34:18.524 Starting thread on core 1 00:34:18.524 Starting thread on core 2 00:34:18.524 Starting thread on core 3 00:34:18.524 Starting thread on core 0 00:34:18.524 23:37:07 -- host/target_disconnect.sh@59 -- # sync 00:34:18.524 00:34:18.524 real 0m11.359s 00:34:18.524 user 0m21.013s 00:34:18.524 sys 0m3.657s 00:34:18.524 23:37:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:18.524 23:37:07 -- common/autotest_common.sh@10 -- # set +x 00:34:18.524 ************************************ 00:34:18.524 END TEST nvmf_target_disconnect_tc2 00:34:18.524 ************************************ 00:34:18.524 23:37:07 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:34:18.524 23:37:07 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:18.524 23:37:07 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:34:18.524 23:37:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:34:18.524 23:37:07 -- nvmf/common.sh@117 -- # sync 00:34:18.524 23:37:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:18.524 23:37:07 -- nvmf/common.sh@120 -- # set +e 00:34:18.524 23:37:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:18.524 23:37:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:18.524 rmmod nvme_tcp 00:34:18.524 rmmod nvme_fabrics 00:34:18.524 rmmod nvme_keyring 00:34:18.524 23:37:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:18.524 23:37:07 -- nvmf/common.sh@124 -- # set -e 00:34:18.524 23:37:07 -- nvmf/common.sh@125 -- # return 0 00:34:18.524 23:37:07 -- nvmf/common.sh@478 -- # '[' -n 4189640 ']' 00:34:18.524 23:37:07 -- nvmf/common.sh@479 -- # killprocess 4189640 00:34:18.524 23:37:07 -- common/autotest_common.sh@936 -- # '[' -z 4189640 ']' 00:34:18.524 23:37:07 -- common/autotest_common.sh@940 -- # kill -0 4189640 00:34:18.524 23:37:07 -- common/autotest_common.sh@941 -- # uname 00:34:18.524 23:37:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:18.524 23:37:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4189640 00:34:18.524 23:37:07 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:34:18.524 23:37:07 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:34:18.525 23:37:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4189640' 00:34:18.525 killing process with pid 4189640 00:34:18.525 23:37:07 -- common/autotest_common.sh@955 -- # kill 4189640 00:34:18.525 23:37:07 -- common/autotest_common.sh@960 -- # wait 4189640 00:34:18.785 23:37:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:34:18.785 23:37:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:34:18.785 23:37:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:34:18.785 23:37:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:18.785 23:37:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:18.785 23:37:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.785 23:37:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:18.786 23:37:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.698 23:37:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:20.698 00:34:20.698 real 0m21.433s 00:34:20.698 user 0m49.021s 00:34:20.698 sys 0m9.387s 00:34:20.698 23:37:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:20.698 23:37:09 -- common/autotest_common.sh@10 -- # set +x 00:34:20.698 ************************************ 00:34:20.698 END TEST nvmf_target_disconnect 00:34:20.698 
************************************ 00:34:20.958 23:37:09 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:34:20.958 23:37:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:20.958 23:37:09 -- common/autotest_common.sh@10 -- # set +x 00:34:20.958 23:37:10 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:34:20.958 00:34:20.958 real 27m27.640s 00:34:20.958 user 69m42.090s 00:34:20.958 sys 7m41.426s 00:34:20.958 23:37:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:20.958 23:37:10 -- common/autotest_common.sh@10 -- # set +x 00:34:20.958 ************************************ 00:34:20.958 END TEST nvmf_tcp 00:34:20.958 ************************************ 00:34:20.958 23:37:10 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:34:20.958 23:37:10 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:20.958 23:37:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:34:20.958 23:37:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:20.958 23:37:10 -- common/autotest_common.sh@10 -- # set +x 00:34:21.218 ************************************ 00:34:21.218 START TEST spdkcli_nvmf_tcp 00:34:21.218 ************************************ 00:34:21.218 23:37:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:21.218 * Looking for test storage... 00:34:21.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:21.218 23:37:10 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:21.218 23:37:10 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:21.218 23:37:10 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:21.218 23:37:10 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.218 23:37:10 -- nvmf/common.sh@7 -- # uname -s 00:34:21.218 23:37:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.218 23:37:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.218 23:37:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.218 23:37:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.218 23:37:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:21.218 23:37:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:21.218 23:37:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.218 23:37:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:21.218 23:37:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.218 23:37:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:21.218 23:37:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:21.218 23:37:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:21.218 23:37:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.218 23:37:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:21.218 23:37:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:21.218 23:37:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:21.218 23:37:10 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.218 23:37:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.218 23:37:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.218 23:37:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.218 23:37:10 -- paths/export.sh@2 -- # PATH=[long, heavily repeated toolchain PATH elided] 00:34:21.218 23:37:10 -- paths/export.sh@3 -- # PATH=[elided] 00:34:21.218 23:37:10 -- paths/export.sh@4 -- # PATH=[elided] 00:34:21.218 23:37:10 -- paths/export.sh@5 -- # export PATH 00:34:21.218 23:37:10 -- paths/export.sh@6 -- # echo [elided] 00:34:21.218 23:37:10 -- nvmf/common.sh@47 -- # : 0 00:34:21.218 23:37:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:21.218 23:37:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:21.218 23:37:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:21.218 23:37:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.218 23:37:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.218 23:37:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:21.218 23:37:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:21.218 23:37:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:21.218 23:37:10 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:21.218 23:37:10 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:21.218 23:37:10 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:21.218 23:37:10 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:21.218 23:37:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:21.218 23:37:10 -- common/autotest_common.sh@10 -- # set +x 00:34:21.218 23:37:10 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:21.218 23:37:10 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4191563 00:34:21.218 23:37:10 -- spdkcli/common.sh@34 -- # waitforlisten 4191563 00:34:21.218 23:37:10 -- common/autotest_common.sh@817 -- # '[' -z 4191563 ']' 00:34:21.218 23:37:10 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.218 23:37:10 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:21.218 23:37:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:21.218 23:37:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.218 23:37:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:21.219 23:37:10 -- common/autotest_common.sh@10 -- # set +x 00:34:21.219 [2024-04-26 23:37:10.411109] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:34:21.219 [2024-04-26 23:37:10.411160] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4191563 ] 00:34:21.219 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.219 [2024-04-26 23:37:10.471594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:21.480 [2024-04-26 23:37:10.501570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:21.480 [2024-04-26 23:37:10.501574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.480 23:37:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:21.480 23:37:10 -- common/autotest_common.sh@850 -- # return 0 00:34:21.480 23:37:10 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:21.480 23:37:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:21.480 23:37:10 -- common/autotest_common.sh@10 -- # set +x 00:34:21.480 23:37:10 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:21.480 23:37:10 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:21.480 23:37:10 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:21.480 23:37:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:21.480 23:37:10 -- common/autotest_common.sh@10 -- # set +x 00:34:21.480 23:37:10 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:21.480 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:21.480 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:21.480 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:21.480 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:21.480 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:21.480 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:21.480 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:21.480 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:21.480 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:21.480 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:21.480 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:21.480 ' 00:34:21.741 [2024-04-26 23:37:10.938320] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:24.284 [2024-04-26 23:37:12.939265] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:24.856 [2024-04-26 23:37:14.102953] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:27.406 [2024-04-26 23:37:16.237065] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:29.324 [2024-04-26 23:37:18.070475] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:30.265 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:30.265 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:30.265 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:30.265 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:30.265 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:30.265 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:30.266 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:30.266 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:30.266 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:30.266 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:30.266 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:30.266 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:30.527 23:37:19 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:30.527 23:37:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:30.527 23:37:19 -- common/autotest_common.sh@10 -- # set +x 00:34:30.527 23:37:19 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:30.527 23:37:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:30.527 23:37:19 -- common/autotest_common.sh@10 -- # set +x 00:34:30.527 23:37:19 -- spdkcli/nvmf.sh@69 -- # check_match 00:34:30.527 23:37:19 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:30.788 23:37:19 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:30.788 23:37:20 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:30.788 23:37:20 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:30.788 23:37:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:30.788 23:37:20 -- common/autotest_common.sh@10 -- # set +x 00:34:31.048 23:37:20 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:31.048 23:37:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:31.048 23:37:20 -- common/autotest_common.sh@10 -- # set +x 00:34:31.048 23:37:20 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:31.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:31.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:31.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:31.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:31.048 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:31.048 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:31.048 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:31.048 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:31.048 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:31.048 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:31.048 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:31.048 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:31.048 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:31.048 ' 00:34:36.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:36.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:36.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:36.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:36.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:36.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:36.336 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:36.336 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:36.336 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:36.336 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:34:36.336 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:36.336 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:36.336 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:36.336 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:36.336 23:37:25 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:36.336 23:37:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:36.336 23:37:25 -- common/autotest_common.sh@10 -- # set +x 00:34:36.336 23:37:25 -- spdkcli/nvmf.sh@90 -- # killprocess 4191563 00:34:36.336 23:37:25 -- common/autotest_common.sh@936 -- # '[' -z 4191563 ']' 00:34:36.336 23:37:25 -- common/autotest_common.sh@940 -- # kill -0 4191563 00:34:36.336 23:37:25 -- common/autotest_common.sh@941 -- # uname 00:34:36.336 23:37:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:36.336 23:37:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4191563 00:34:36.336 23:37:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:36.336 23:37:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:36.336 23:37:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4191563' 00:34:36.336 killing process with pid 4191563 00:34:36.336 23:37:25 -- common/autotest_common.sh@955 -- # kill 4191563 00:34:36.336 [2024-04-26 23:37:25.541829] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:36.336 23:37:25 -- common/autotest_common.sh@960 -- # wait 4191563 00:34:36.597 23:37:25 -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:36.597 23:37:25 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:36.597 23:37:25 -- spdkcli/common.sh@13 -- # '[' -n 4191563 ']' 00:34:36.597 23:37:25 -- spdkcli/common.sh@14 -- # killprocess 4191563 00:34:36.597 23:37:25 -- common/autotest_common.sh@936 -- # '[' -z 4191563 ']' 00:34:36.597 23:37:25 -- common/autotest_common.sh@940 -- # kill -0 4191563 00:34:36.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (4191563) - No such process 00:34:36.597 23:37:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 4191563 is not found' 00:34:36.597 Process with pid 4191563 is not found 00:34:36.597 23:37:25 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:36.597 23:37:25 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:36.597 23:37:25 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:36.597 00:34:36.597 real 0m15.449s 00:34:36.597 user 0m32.445s 00:34:36.597 sys 0m0.741s 00:34:36.597 23:37:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:36.597 23:37:25 -- common/autotest_common.sh@10 -- # set +x 00:34:36.597 ************************************ 00:34:36.597 END TEST spdkcli_nvmf_tcp 00:34:36.597 ************************************ 00:34:36.597 23:37:25 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:36.597 23:37:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:34:36.597 23:37:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:36.597 23:37:25 -- 
common/autotest_common.sh@10 -- # set +x 00:34:36.857 ************************************ 00:34:36.857 START TEST nvmf_identify_passthru ************************************ 00:34:36.857 23:37:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:36.857 * Looking for test storage... 00:34:36.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:36.857 23:37:25 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:36.857 23:37:25 -- nvmf/common.sh@7 -- # uname -s 00:34:36.858 23:37:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:36.858 23:37:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:36.858 23:37:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:36.858 23:37:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:36.858 23:37:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:36.858 23:37:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:36.858 23:37:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:36.858 23:37:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:36.858 23:37:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:36.858 23:37:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:36.858 23:37:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:36.858 23:37:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:36.858 23:37:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:36.858 23:37:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:36.858 23:37:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:36.858 23:37:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:36.858 23:37:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:36.858 23:37:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:36.858 23:37:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:36.858 23:37:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:36.858 23:37:25 -- paths/export.sh@2 -- # PATH=[long, heavily repeated toolchain PATH elided] 00:34:36.858 23:37:25 -- paths/export.sh@3 -- # PATH=[elided] 00:34:36.858 23:37:25 -- paths/export.sh@4 -- # PATH=[elided] 00:34:36.858 23:37:25 -- paths/export.sh@5 -- # export PATH 00:34:36.858 23:37:25 -- paths/export.sh@6 -- # echo [elided] 00:34:36.858 23:37:25 -- nvmf/common.sh@47 -- # : 0 00:34:36.858 23:37:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:36.858 23:37:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:36.858 23:37:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:36.858 23:37:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:36.858 23:37:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:36.858 23:37:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:36.858 23:37:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:36.858 23:37:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:36.858 23:37:25 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:36.858 23:37:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:36.858 23:37:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:36.858 23:37:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:36.858 23:37:25 -- paths/export.sh@2 -- # PATH=[elided] 00:34:36.858 23:37:25 -- paths/export.sh@3 -- # PATH=[elided] 00:34:36.858 23:37:25 -- paths/export.sh@4 -- # PATH=[elided] 00:34:36.858 23:37:25 -- paths/export.sh@5 -- # export PATH 00:34:36.858 23:37:25 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.858 23:37:25 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:36.858 23:37:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:34:36.858 23:37:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:36.858 23:37:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:34:36.858 23:37:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:34:36.858 23:37:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:34:36.858 23:37:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.858 23:37:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:36.858 23:37:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.858 23:37:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:34:36.858 23:37:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:34:36.858 23:37:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:34:36.858 23:37:26 -- common/autotest_common.sh@10 -- # set +x 00:34:45.020 23:37:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:45.020 23:37:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:34:45.020 23:37:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:45.020 23:37:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:45.020 23:37:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:45.020 23:37:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:45.020 23:37:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:45.020 23:37:33 -- nvmf/common.sh@295 -- # net_devs=() 00:34:45.020 23:37:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:45.020 23:37:33 -- nvmf/common.sh@296 -- # e810=() 00:34:45.020 23:37:33 -- nvmf/common.sh@296 -- # local -ga e810 00:34:45.020 23:37:33 -- nvmf/common.sh@297 -- # x722=() 00:34:45.020 23:37:33 -- nvmf/common.sh@297 -- # local -ga x722 00:34:45.020 23:37:33 -- nvmf/common.sh@298 -- # mlx=() 00:34:45.020 23:37:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:34:45.020 23:37:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:45.020 23:37:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:45.020 23:37:33 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:45.020 23:37:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:45.020 23:37:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:45.020 23:37:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:45.020 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:45.020 23:37:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:45.020 23:37:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:45.020 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:45.020 23:37:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:45.020 23:37:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:45.020 23:37:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.020 23:37:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:34:45.020 23:37:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.020 23:37:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:45.020 Found net devices under 0000:31:00.0: cvl_0_0 00:34:45.020 23:37:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.020 23:37:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:45.020 23:37:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.020 23:37:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:34:45.020 23:37:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.020 23:37:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:45.020 Found net devices under 0000:31:00.1: cvl_0_1 00:34:45.020 23:37:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.020 23:37:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:34:45.020 23:37:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:34:45.020 23:37:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:34:45.020 23:37:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:45.020 23:37:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:45.020 23:37:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:45.020 23:37:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:45.020 23:37:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:45.020 23:37:33 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:45.020 23:37:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:45.020 23:37:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:45.020 23:37:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:45.020 23:37:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:45.020 23:37:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:45.020 23:37:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:45.020 23:37:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:45.020 23:37:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:45.020 23:37:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:45.020 23:37:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:45.020 23:37:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:45.020 23:37:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:45.020 23:37:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:45.020 23:37:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:45.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:45.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:34:45.020 00:34:45.020 --- 10.0.0.2 ping statistics --- 00:34:45.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.020 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:34:45.020 23:37:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:45.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:45.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms 00:34:45.020 00:34:45.020 --- 10.0.0.1 ping statistics --- 00:34:45.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.020 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:34:45.020 23:37:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:45.020 23:37:33 -- nvmf/common.sh@411 -- # return 0 00:34:45.020 23:37:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:34:45.020 23:37:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:45.020 23:37:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:34:45.020 23:37:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:45.020 23:37:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:34:45.020 23:37:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:34:45.020 23:37:33 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:45.020 23:37:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:45.020 23:37:33 -- common/autotest_common.sh@10 -- # set +x 00:34:45.020 23:37:33 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:45.020 23:37:33 -- common/autotest_common.sh@1510 -- # bdfs=() 00:34:45.020 23:37:33 -- common/autotest_common.sh@1510 -- # local bdfs 00:34:45.020 23:37:33 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:34:45.020 23:37:33 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:34:45.020 23:37:33 -- common/autotest_common.sh@1499 -- # bdfs=() 00:34:45.020 23:37:33 -- common/autotest_common.sh@1499 -- # local bdfs 00:34:45.020 23:37:33 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:34:45.020 23:37:33 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:45.020 23:37:33 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:34:45.020 23:37:33 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:34:45.020 23:37:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:34:45.020 23:37:33 -- common/autotest_common.sh@1513 -- # echo 0000:65:00.0 00:34:45.020 23:37:33 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:34:45.020 23:37:33 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:34:45.020 23:37:33 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:34:45.020 23:37:33 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:45.020 23:37:33 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:45.020 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.020 23:37:33 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:34:45.020 23:37:33 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:34:45.020 23:37:33 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:45.020 23:37:33 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:45.020 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.280 23:37:34 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:34:45.280 23:37:34 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:45.280 23:37:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:45.280 23:37:34 -- common/autotest_common.sh@10 -- # set +x 00:34:45.280 23:37:34 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:45.280 23:37:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:45.280 23:37:34 -- common/autotest_common.sh@10 -- # set +x 00:34:45.280 23:37:34 -- target/identify_passthru.sh@31 -- # nvmfpid=5158 00:34:45.280 23:37:34 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:45.280 23:37:34 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:45.280 23:37:34 -- target/identify_passthru.sh@35 -- # waitforlisten 5158 00:34:45.280 23:37:34 -- common/autotest_common.sh@817 -- # '[' -z 5158 ']' 00:34:45.280 23:37:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.280 23:37:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:45.280 23:37:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.280 23:37:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:45.280 23:37:34 -- common/autotest_common.sh@10 -- # set +x 00:34:45.280 [2024-04-26 23:37:34.516259] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
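Before starting the target, the test resolves the first local NVMe controller and reads its serial and model numbers straight over PCIe, so it can later check that the passthru subsystem reports the same values over NVMe/TCP. Condensed into standalone commands, the pattern from the xtrace above looks like this (a minimal sketch run from the SPDK tree; the BDF 0000:65:00.0 is specific to this host, and head -n1 stands in for the test's get_first_nvme_bdf helper):

  bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)   # 0000:65:00.0 on this host
  serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')                    # S64GNE0R605494
  model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
            | grep 'Model Number:' | awk '{print $3}')                      # SAMSUNG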
00:34:45.280 [2024-04-26 23:37:34.516312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.540 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.540 [2024-04-26 23:37:34.583015] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:45.540 [2024-04-26 23:37:34.615910] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.540 [2024-04-26 23:37:34.615952] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:45.540 [2024-04-26 23:37:34.615959] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.540 [2024-04-26 23:37:34.615966] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.540 [2024-04-26 23:37:34.615972] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:45.540 [2024-04-26 23:37:34.616087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.540 [2024-04-26 23:37:34.616211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:45.540 [2024-04-26 23:37:34.616373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.540 [2024-04-26 23:37:34.616374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:46.109 23:37:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:46.109 23:37:35 -- common/autotest_common.sh@850 -- # return 0 00:34:46.109 23:37:35 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:46.109 23:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:46.109 23:37:35 -- common/autotest_common.sh@10 -- # set +x 00:34:46.109 INFO: Log level set to 20 00:34:46.109 INFO: Requests: 00:34:46.109 { 00:34:46.109 "jsonrpc": "2.0", 00:34:46.109 "method": "nvmf_set_config", 00:34:46.109 "id": 1, 00:34:46.109 "params": { 00:34:46.109 "admin_cmd_passthru": { 00:34:46.109 "identify_ctrlr": true 00:34:46.109 } 00:34:46.109 } 00:34:46.109 } 00:34:46.109 00:34:46.109 INFO: response: 00:34:46.109 { 00:34:46.109 "jsonrpc": "2.0", 00:34:46.109 "id": 1, 00:34:46.109 "result": true 00:34:46.109 } 00:34:46.110 00:34:46.110 23:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.110 23:37:35 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:46.110 23:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:46.110 23:37:35 -- common/autotest_common.sh@10 -- # set +x 00:34:46.110 INFO: Setting log level to 20 00:34:46.110 INFO: Setting log level to 20 00:34:46.110 INFO: Log level set to 20 00:34:46.110 INFO: Log level set to 20 00:34:46.110 INFO: Requests: 00:34:46.110 { 00:34:46.110 "jsonrpc": "2.0", 00:34:46.110 "method": "framework_start_init", 00:34:46.110 "id": 1 00:34:46.110 } 00:34:46.110 00:34:46.110 INFO: Requests: 00:34:46.110 { 00:34:46.110 "jsonrpc": "2.0", 00:34:46.110 "method": "framework_start_init", 00:34:46.110 "id": 1 00:34:46.110 } 00:34:46.110 00:34:46.370 [2024-04-26 23:37:35.371254] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:46.370 INFO: response: 00:34:46.370 { 00:34:46.370 "jsonrpc": "2.0", 00:34:46.370 "id": 1, 00:34:46.370 "result": true 00:34:46.370 } 00:34:46.370 00:34:46.370 INFO: response: 00:34:46.370 { 00:34:46.370 
"jsonrpc": "2.0", 00:34:46.370 "id": 1, 00:34:46.370 "result": true 00:34:46.370 } 00:34:46.370 00:34:46.370 23:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.370 23:37:35 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:46.370 23:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:46.370 23:37:35 -- common/autotest_common.sh@10 -- # set +x 00:34:46.370 INFO: Setting log level to 40 00:34:46.370 INFO: Setting log level to 40 00:34:46.370 INFO: Setting log level to 40 00:34:46.370 [2024-04-26 23:37:35.384479] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:46.370 23:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.371 23:37:35 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:46.371 23:37:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:46.371 23:37:35 -- common/autotest_common.sh@10 -- # set +x 00:34:46.371 23:37:35 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:34:46.371 23:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:46.371 23:37:35 -- common/autotest_common.sh@10 -- # set +x 00:34:46.631 Nvme0n1 00:34:46.631 23:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.631 23:37:35 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:46.631 23:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:46.631 23:37:35 -- common/autotest_common.sh@10 -- # set +x 00:34:46.631 23:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.631 23:37:35 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:46.631 23:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:46.631 23:37:35 -- common/autotest_common.sh@10 -- # set +x 00:34:46.631 23:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.631 23:37:35 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:46.631 23:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:46.631 23:37:35 -- common/autotest_common.sh@10 -- # set +x 00:34:46.631 [2024-04-26 23:37:35.764172] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.631 23:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.631 23:37:35 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:46.631 23:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:46.631 23:37:35 -- common/autotest_common.sh@10 -- # set +x 00:34:46.631 [2024-04-26 23:37:35.775955] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:46.631 [ 00:34:46.631 { 00:34:46.631 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:46.631 "subtype": "Discovery", 00:34:46.631 "listen_addresses": [], 00:34:46.631 "allow_any_host": true, 00:34:46.631 "hosts": [] 00:34:46.631 }, 00:34:46.631 { 00:34:46.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:46.631 "subtype": "NVMe", 00:34:46.631 "listen_addresses": [ 00:34:46.631 { 00:34:46.631 "transport": "TCP", 00:34:46.631 "trtype": "TCP", 00:34:46.631 "adrfam": "IPv4", 00:34:46.631 "traddr": "10.0.0.2", 00:34:46.631 "trsvcid": "4420" 00:34:46.631 } 00:34:46.631 ], 
00:34:46.631 "allow_any_host": true, 00:34:46.631 "hosts": [], 00:34:46.631 "serial_number": "SPDK00000000000001", 00:34:46.631 "model_number": "SPDK bdev Controller", 00:34:46.631 "max_namespaces": 1, 00:34:46.631 "min_cntlid": 1, 00:34:46.631 "max_cntlid": 65519, 00:34:46.631 "namespaces": [ 00:34:46.631 { 00:34:46.631 "nsid": 1, 00:34:46.631 "bdev_name": "Nvme0n1", 00:34:46.631 "name": "Nvme0n1", 00:34:46.631 "nguid": "3634473052605494002538450000001F", 00:34:46.631 "uuid": "36344730-5260-5494-0025-38450000001f" 00:34:46.631 } 00:34:46.631 ] 00:34:46.631 } 00:34:46.631 ] 00:34:46.631 23:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.631 23:37:35 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:46.631 23:37:35 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:46.631 23:37:35 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:46.631 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.893 23:37:35 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:34:46.893 23:37:35 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:46.893 23:37:35 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:46.893 23:37:35 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:46.893 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.893 23:37:36 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:34:46.893 23:37:36 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:34:46.893 23:37:36 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:34:46.893 23:37:36 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:46.893 23:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:46.893 23:37:36 -- common/autotest_common.sh@10 -- # set +x 00:34:46.893 23:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.893 23:37:36 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:46.893 23:37:36 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:46.893 23:37:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:34:46.893 23:37:36 -- nvmf/common.sh@117 -- # sync 00:34:46.893 23:37:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:46.893 23:37:36 -- nvmf/common.sh@120 -- # set +e 00:34:46.893 23:37:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:46.893 23:37:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:46.893 rmmod nvme_tcp 00:34:47.154 rmmod nvme_fabrics 00:34:47.154 rmmod nvme_keyring 00:34:47.154 23:37:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:47.154 23:37:36 -- nvmf/common.sh@124 -- # set -e 00:34:47.154 23:37:36 -- nvmf/common.sh@125 -- # return 0 00:34:47.154 23:37:36 -- nvmf/common.sh@478 -- # '[' -n 5158 ']' 00:34:47.154 23:37:36 -- nvmf/common.sh@479 -- # killprocess 5158 00:34:47.154 23:37:36 -- common/autotest_common.sh@936 -- # '[' -z 5158 ']' 00:34:47.154 23:37:36 -- common/autotest_common.sh@940 -- # kill -0 5158 00:34:47.154 23:37:36 -- common/autotest_common.sh@941 -- # uname 00:34:47.154 23:37:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:47.154 23:37:36 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 5158 00:34:47.154 23:37:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:47.154 23:37:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:47.154 23:37:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 5158' 00:34:47.154 killing process with pid 5158 00:34:47.154 23:37:36 -- common/autotest_common.sh@955 -- # kill 5158 00:34:47.154 [2024-04-26 23:37:36.244240] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:47.154 23:37:36 -- common/autotest_common.sh@960 -- # wait 5158 00:34:47.414 23:37:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:34:47.414 23:37:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:34:47.414 23:37:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:34:47.414 23:37:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:47.414 23:37:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:47.414 23:37:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.414 23:37:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:47.414 23:37:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.326 23:37:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:49.326 00:34:49.326 real 0m12.703s 00:34:49.326 user 0m9.807s 00:34:49.326 sys 0m6.221s 00:34:49.326 23:37:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:49.326 23:37:38 -- common/autotest_common.sh@10 -- # set +x 00:34:49.326 ************************************ 00:34:49.326 END TEST nvmf_identify_passthru 00:34:49.326 ************************************ 00:34:49.586 23:37:38 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:49.586 23:37:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:49.586 23:37:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:49.586 23:37:38 -- common/autotest_common.sh@10 -- # set +x 00:34:49.586 ************************************ 00:34:49.586 START TEST nvmf_dif 00:34:49.586 ************************************ 00:34:49.586 23:37:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:49.849 * Looking for test storage... 
00:34:49.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:49.849 23:37:38 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.849 23:37:38 -- nvmf/common.sh@7 -- # uname -s 00:34:49.849 23:37:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.849 23:37:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.849 23:37:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.849 23:37:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.849 23:37:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.849 23:37:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.849 23:37:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.849 23:37:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.849 23:37:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.849 23:37:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.849 23:37:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:49.849 23:37:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:49.849 23:37:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.849 23:37:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.849 23:37:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.849 23:37:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.849 23:37:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.849 23:37:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.849 23:37:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.849 23:37:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.849 23:37:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.849 23:37:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.849 23:37:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.849 23:37:38 -- paths/export.sh@5 -- # export PATH 00:34:49.849 23:37:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.849 23:37:38 -- nvmf/common.sh@47 -- # : 0 00:34:49.849 23:37:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:49.849 23:37:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:49.849 23:37:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.849 23:37:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.849 23:37:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.849 23:37:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:49.849 23:37:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:49.849 23:37:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:49.849 23:37:38 -- target/dif.sh@15 -- # NULL_META=16 00:34:49.849 23:37:38 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:49.849 23:37:38 -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:49.849 23:37:38 -- target/dif.sh@15 -- # NULL_DIF=1 00:34:49.849 23:37:38 -- target/dif.sh@135 -- # nvmftestinit 00:34:49.849 23:37:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:34:49.849 23:37:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.849 23:37:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:34:49.849 23:37:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:34:49.849 23:37:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:34:49.849 23:37:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.849 23:37:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:49.849 23:37:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.849 23:37:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:34:49.849 23:37:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:34:49.849 23:37:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:34:49.849 23:37:38 -- common/autotest_common.sh@10 -- # set +x 00:34:56.437 23:37:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:56.437 23:37:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:34:56.437 23:37:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:56.437 23:37:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:56.437 23:37:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:56.437 23:37:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:56.437 23:37:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:56.437 23:37:45 -- nvmf/common.sh@295 -- # net_devs=() 00:34:56.437 23:37:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:56.437 23:37:45 -- nvmf/common.sh@296 -- # e810=() 00:34:56.437 23:37:45 -- nvmf/common.sh@296 -- # local -ga e810 00:34:56.437 23:37:45 -- nvmf/common.sh@297 -- # x722=() 00:34:56.437 23:37:45 -- nvmf/common.sh@297 -- # local -ga x722 00:34:56.437 23:37:45 -- nvmf/common.sh@298 -- # mlx=() 00:34:56.437 23:37:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:34:56.437 23:37:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.437 23:37:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.437 23:37:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.437 23:37:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
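The gather_supported_nvmf_pci_devs step running here only builds bash arrays of PCI addresses keyed on vendor:device IDs (0x8086:0x159b is the Intel E810 part present on this host) and then maps each hit to its kernel net device through sysfs. A hypothetical standalone equivalent of that match, using the same sysfs paths the script reads:

  for dev in /sys/bus/pci/devices/*; do
    # the vendor/device files hold the IDs as 0x-prefixed hex strings
    [[ $(<"$dev/vendor") == 0x8086 && $(<"$dev/device") == 0x159b ]] || continue
    echo "Found ${dev##*/}: $(ls "$dev/net" 2>/dev/null)"   # cvl_0_0 / cvl_0_1 on this host
  done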
00:34:56.437 23:37:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.437 23:37:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.437 23:37:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.437 23:37:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.437 23:37:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.437 23:37:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.437 23:37:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.437 23:37:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:56.437 23:37:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:56.437 23:37:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:56.437 23:37:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:56.437 23:37:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:56.437 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:56.437 23:37:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:56.437 23:37:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:56.437 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:56.437 23:37:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:56.437 23:37:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:56.437 23:37:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.437 23:37:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:34:56.437 23:37:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.437 23:37:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:56.437 Found net devices under 0000:31:00.0: cvl_0_0 00:34:56.437 23:37:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.437 23:37:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:56.437 23:37:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.437 23:37:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:34:56.437 23:37:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.437 23:37:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:56.437 Found net devices under 0000:31:00.1: cvl_0_1 00:34:56.437 23:37:45 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:56.437 23:37:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:34:56.437 23:37:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:34:56.437 23:37:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:34:56.437 23:37:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:34:56.437 23:37:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.437 23:37:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.437 23:37:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.437 23:37:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:56.437 23:37:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.437 23:37:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.437 23:37:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:56.437 23:37:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.437 23:37:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.437 23:37:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:56.437 23:37:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:56.437 23:37:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.437 23:37:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.437 23:37:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.437 23:37:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.437 23:37:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:56.437 23:37:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:56.437 23:37:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.437 23:37:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.438 23:37:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:56.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:56.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:34:56.438 00:34:56.438 --- 10.0.0.2 ping statistics --- 00:34:56.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.438 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:34:56.438 23:37:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:56.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:34:56.438 00:34:56.438 --- 10.0.0.1 ping statistics --- 00:34:56.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.438 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:34:56.438 23:37:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.438 23:37:45 -- nvmf/common.sh@411 -- # return 0 00:34:56.438 23:37:45 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:34:56.438 23:37:45 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:59.743 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:34:59.743 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:59.743 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:00.057 23:37:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:00.057 23:37:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:35:00.057 23:37:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:35:00.057 23:37:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:00.057 23:37:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:35:00.057 23:37:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:35:00.057 23:37:49 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:00.057 23:37:49 -- target/dif.sh@137 -- # nvmfappstart 00:35:00.057 23:37:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:35:00.057 23:37:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:35:00.057 23:37:49 -- common/autotest_common.sh@10 -- # set +x 00:35:00.057 23:37:49 -- nvmf/common.sh@470 -- # nvmfpid=11108 00:35:00.057 23:37:49 -- nvmf/common.sh@471 -- # waitforlisten 11108 00:35:00.057 23:37:49 -- common/autotest_common.sh@817 -- # '[' -z 11108 ']' 00:35:00.057 23:37:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.057 23:37:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:00.057 23:37:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
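As in the identify_passthru run earlier, nvmf_tcp_init splits the two E810 ports across a network namespace so target and initiator traffic really traverses the NICs: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the pings above verify both directions. Pulled out of the xtrace, the topology amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on port 4420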
00:35:00.057 23:37:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:00.057 23:37:49 -- common/autotest_common.sh@10 -- # set +x 00:35:00.057 23:37:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:00.057 [2024-04-26 23:37:49.293111] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:35:00.057 [2024-04-26 23:37:49.293183] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:00.353 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.353 [2024-04-26 23:37:49.364324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.353 [2024-04-26 23:37:49.395872] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:00.353 [2024-04-26 23:37:49.395910] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:00.353 [2024-04-26 23:37:49.395917] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:00.353 [2024-04-26 23:37:49.395924] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:00.353 [2024-04-26 23:37:49.395930] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:00.353 [2024-04-26 23:37:49.395954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.925 23:37:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:00.925 23:37:50 -- common/autotest_common.sh@850 -- # return 0 00:35:00.925 23:37:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:35:00.925 23:37:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:00.925 23:37:50 -- common/autotest_common.sh@10 -- # set +x 00:35:00.925 23:37:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:00.925 23:37:50 -- target/dif.sh@139 -- # create_transport 00:35:00.925 23:37:50 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:00.925 23:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:00.925 23:37:50 -- common/autotest_common.sh@10 -- # set +x 00:35:00.925 [2024-04-26 23:37:50.092517] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:00.925 23:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:00.925 23:37:50 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:00.925 23:37:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:00.925 23:37:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:00.925 23:37:50 -- common/autotest_common.sh@10 -- # set +x 00:35:01.186 ************************************ 00:35:01.186 START TEST fio_dif_1_default 00:35:01.186 ************************************ 00:35:01.186 23:37:50 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:35:01.186 23:37:50 -- target/dif.sh@86 -- # create_subsystems 0 00:35:01.186 23:37:50 -- target/dif.sh@28 -- # local sub 00:35:01.186 23:37:50 -- target/dif.sh@30 -- # for sub in "$@" 00:35:01.186 23:37:50 -- target/dif.sh@31 -- # create_subsystem 0 00:35:01.186 23:37:50 -- target/dif.sh@18 -- # local sub_id=0 00:35:01.186 23:37:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
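The bdev_null_create call just issued is what makes this a DIF test: it creates a 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata, and DIF type 1 protection, which the TCP transport created above with --dif-insert-or-strip will insert and strip in flight. Since rpc_cmd is a thin wrapper over scripts/rpc.py, the whole subsystem could plausibly be reproduced by hand with the same arguments seen in this log:

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420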
00:35:01.186 23:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:01.186 23:37:50 -- common/autotest_common.sh@10 -- # set +x 00:35:01.186 bdev_null0 00:35:01.186 23:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:01.186 23:37:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:01.186 23:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:01.186 23:37:50 -- common/autotest_common.sh@10 -- # set +x 00:35:01.186 23:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:01.186 23:37:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:01.186 23:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:01.186 23:37:50 -- common/autotest_common.sh@10 -- # set +x 00:35:01.186 23:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:01.186 23:37:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:01.186 23:37:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:01.186 23:37:50 -- common/autotest_common.sh@10 -- # set +x 00:35:01.186 [2024-04-26 23:37:50.273095] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.186 23:37:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:01.186 23:37:50 -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:01.186 23:37:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.186 23:37:50 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.186 23:37:50 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:01.186 23:37:50 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:01.186 23:37:50 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:01.186 23:37:50 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.186 23:37:50 -- common/autotest_common.sh@1327 -- # shift 00:35:01.186 23:37:50 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:01.186 23:37:50 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:01.186 23:37:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.186 23:37:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:01.186 23:37:50 -- target/dif.sh@82 -- # gen_fio_conf 00:35:01.186 23:37:50 -- nvmf/common.sh@521 -- # config=() 00:35:01.186 23:37:50 -- target/dif.sh@54 -- # local file 00:35:01.186 23:37:50 -- nvmf/common.sh@521 -- # local subsystem config 00:35:01.186 23:37:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:01.186 23:37:50 -- target/dif.sh@56 -- # cat 00:35:01.186 23:37:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:01.186 { 00:35:01.186 "params": { 00:35:01.186 "name": "Nvme$subsystem", 00:35:01.186 "trtype": "$TEST_TRANSPORT", 00:35:01.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.186 "adrfam": "ipv4", 00:35:01.186 "trsvcid": "$NVMF_PORT", 00:35:01.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.186 "hdgst": ${hdgst:-false}, 00:35:01.186 "ddgst": ${ddgst:-false} 00:35:01.186 }, 00:35:01.186 "method": "bdev_nvme_attach_controller" 
00:35:01.186 } 00:35:01.186 EOF 00:35:01.186 )") 00:35:01.186 23:37:50 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.186 23:37:50 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:01.186 23:37:50 -- nvmf/common.sh@543 -- # cat 00:35:01.186 23:37:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:01.186 23:37:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:01.186 23:37:50 -- target/dif.sh@72 -- # (( file <= files )) 00:35:01.186 23:37:50 -- nvmf/common.sh@545 -- # jq . 00:35:01.186 23:37:50 -- nvmf/common.sh@546 -- # IFS=, 00:35:01.186 23:37:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:01.186 "params": { 00:35:01.186 "name": "Nvme0", 00:35:01.186 "trtype": "tcp", 00:35:01.186 "traddr": "10.0.0.2", 00:35:01.186 "adrfam": "ipv4", 00:35:01.186 "trsvcid": "4420", 00:35:01.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:01.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:01.186 "hdgst": false, 00:35:01.186 "ddgst": false 00:35:01.186 }, 00:35:01.186 "method": "bdev_nvme_attach_controller" 00:35:01.186 }' 00:35:01.186 23:37:50 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:01.186 23:37:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:01.186 23:37:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.186 23:37:50 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.186 23:37:50 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:01.186 23:37:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:01.186 23:37:50 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:01.186 23:37:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:01.186 23:37:50 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:01.186 23:37:50 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.754 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:01.754 fio-3.35 00:35:01.754 Starting 1 thread 00:35:01.754 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.014 00:35:14.014 filename0: (groupid=0, jobs=1): err= 0: pid=11672: Fri Apr 26 23:38:01 2024 00:35:14.014 read: IOPS=185, BW=743KiB/s (761kB/s)(7456KiB/10039msec) 00:35:14.014 slat (nsec): min=5324, max=33835, avg=6251.59, stdev=1551.66 00:35:14.014 clat (usec): min=687, max=47019, avg=21524.10, stdev=20453.19 00:35:14.014 lat (usec): min=693, max=47050, avg=21530.35, stdev=20453.18 00:35:14.014 clat percentiles (usec): 00:35:14.014 | 1.00th=[ 742], 5.00th=[ 938], 10.00th=[ 971], 20.00th=[ 996], 00:35:14.014 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[41157], 60.00th=[41681], 00:35:14.014 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:14.014 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:35:14.014 | 99.99th=[46924] 00:35:14.014 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=744.00, stdev=34.24, samples=20 00:35:14.014 iops : min= 168, max= 192, avg=186.00, stdev= 8.56, samples=20 00:35:14.014 lat (usec) : 750=1.13%, 1000=20.87% 00:35:14.014 lat (msec) : 2=27.79%, 50=50.21% 00:35:14.014 cpu : usr=95.06%, sys=4.70%, ctx=25, majf=0, minf=243 00:35:14.014 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
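The run above is plain fio with the SPDK bdev engine preloaded; the harness hands it the generated bdev JSON config and the job file through /dev/fd/62 and /dev/fd/61 rather than real files. Stripped of the file-descriptor plumbing, the invocation reduces to this sketch (bdev.json and job.fio are placeholder names):

  LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

The numbers reported are self-consistent: with iodepth 4 and a mean completion latency of about 21.5 ms, throughput works out to 4 / 0.0215 s ≈ 186 IOPS, i.e. ≈743 KiB/s at 4 KiB per read, matching the reported bandwidth.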
00:35:14.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.014 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.014 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:14.014 00:35:14.014 Run status group 0 (all jobs): 00:35:14.014 READ: bw=743KiB/s (761kB/s), 743KiB/s-743KiB/s (761kB/s-761kB/s), io=7456KiB (7635kB), run=10039-10039msec 00:35:14.014 23:38:01 -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:14.014 23:38:01 -- target/dif.sh@43 -- # local sub 00:35:14.014 23:38:01 -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.014 23:38:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:14.014 23:38:01 -- target/dif.sh@36 -- # local sub_id=0 00:35:14.014 23:38:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:14.014 23:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.014 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.014 23:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.014 23:38:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:14.014 23:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.014 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.014 23:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.014 00:35:14.014 real 0m11.212s 00:35:14.014 user 0m27.046s 00:35:14.014 sys 0m0.780s 00:35:14.014 23:38:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:14.014 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.014 ************************************ 00:35:14.014 END TEST fio_dif_1_default 00:35:14.014 ************************************ 00:35:14.014 23:38:01 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:14.014 23:38:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:14.014 23:38:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:14.014 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.014 ************************************ 00:35:14.014 START TEST fio_dif_1_multi_subsystems 00:35:14.014 ************************************ 00:35:14.014 23:38:01 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:35:14.014 23:38:01 -- target/dif.sh@92 -- # local files=1 00:35:14.014 23:38:01 -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:14.014 23:38:01 -- target/dif.sh@28 -- # local sub 00:35:14.014 23:38:01 -- target/dif.sh@30 -- # for sub in "$@" 00:35:14.014 23:38:01 -- target/dif.sh@31 -- # create_subsystem 0 00:35:14.014 23:38:01 -- target/dif.sh@18 -- # local sub_id=0 00:35:14.014 23:38:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:14.014 23:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.014 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.014 bdev_null0 00:35:14.014 23:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.014 23:38:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:14.014 23:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.014 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.014 23:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.014 23:38:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:14.014 23:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.014 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.014 23:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.014 23:38:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:14.014 23:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.014 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.014 [2024-04-26 23:38:01.676247] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.014 23:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.014 23:38:01 -- target/dif.sh@30 -- # for sub in "$@" 00:35:14.014 23:38:01 -- target/dif.sh@31 -- # create_subsystem 1 00:35:14.014 23:38:01 -- target/dif.sh@18 -- # local sub_id=1 00:35:14.014 23:38:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:14.014 23:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.014 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.014 bdev_null1 00:35:14.014 23:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.015 23:38:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:14.015 23:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.015 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.015 23:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.015 23:38:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:14.015 23:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.015 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.015 23:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.015 23:38:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:14.015 23:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.015 23:38:01 -- common/autotest_common.sh@10 -- # set +x 00:35:14.015 23:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.015 23:38:01 -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:14.015 23:38:01 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:14.015 23:38:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:14.015 23:38:01 -- nvmf/common.sh@521 -- # config=() 00:35:14.015 23:38:01 -- nvmf/common.sh@521 -- # local subsystem config 00:35:14.015 23:38:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:14.015 23:38:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:14.015 { 00:35:14.015 "params": { 00:35:14.015 "name": "Nvme$subsystem", 00:35:14.015 "trtype": "$TEST_TRANSPORT", 00:35:14.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.015 "adrfam": "ipv4", 00:35:14.015 "trsvcid": "$NVMF_PORT", 00:35:14.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.015 "hdgst": ${hdgst:-false}, 00:35:14.015 "ddgst": ${ddgst:-false} 00:35:14.015 }, 00:35:14.015 "method": "bdev_nvme_attach_controller" 00:35:14.015 } 00:35:14.015 EOF 00:35:14.015 )") 00:35:14.015 23:38:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.015 23:38:01 -- common/autotest_common.sh@1342 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.015 23:38:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:14.015 23:38:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:14.015 23:38:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:14.015 23:38:01 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.015 23:38:01 -- common/autotest_common.sh@1327 -- # shift 00:35:14.015 23:38:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:14.015 23:38:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.015 23:38:01 -- target/dif.sh@82 -- # gen_fio_conf 00:35:14.015 23:38:01 -- target/dif.sh@54 -- # local file 00:35:14.015 23:38:01 -- target/dif.sh@56 -- # cat 00:35:14.015 23:38:01 -- nvmf/common.sh@543 -- # cat 00:35:14.015 23:38:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.015 23:38:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:14.015 23:38:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:14.015 23:38:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:14.015 23:38:01 -- target/dif.sh@72 -- # (( file <= files )) 00:35:14.015 23:38:01 -- target/dif.sh@73 -- # cat 00:35:14.015 23:38:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:14.015 23:38:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:14.015 { 00:35:14.015 "params": { 00:35:14.015 "name": "Nvme$subsystem", 00:35:14.015 "trtype": "$TEST_TRANSPORT", 00:35:14.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.015 "adrfam": "ipv4", 00:35:14.015 "trsvcid": "$NVMF_PORT", 00:35:14.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.015 "hdgst": ${hdgst:-false}, 00:35:14.015 "ddgst": ${ddgst:-false} 00:35:14.015 }, 00:35:14.015 "method": "bdev_nvme_attach_controller" 00:35:14.015 } 00:35:14.015 EOF 00:35:14.015 )") 00:35:14.015 23:38:01 -- nvmf/common.sh@543 -- # cat 00:35:14.015 23:38:01 -- target/dif.sh@72 -- # (( file++ )) 00:35:14.015 23:38:01 -- target/dif.sh@72 -- # (( file <= files )) 00:35:14.015 23:38:01 -- nvmf/common.sh@545 -- # jq . 
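The trace above shows gen_nvmf_target_json building one "bdev_nvme_attach_controller" params block per subsystem in a bash array, then comma-joining the blocks and validating the result with jq before fio reads it from /dev/fd/62. A minimal sketch of that pattern, assuming the joined blocks are spliced into a plain bdev "config" array (the exact wrapper document in nvmf/common.sh may differ):

#!/usr/bin/env bash
# One attach-controller block per subsystem, as in the trace above.
config=()
for subsystem in 0 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Setting IFS=, makes "${config[*]}" comma-join the blocks; jq . then
# validates and pretty-prints the merged document, mirroring the
# IFS=, / printf / jq sequence in the trace.
blocks=$(IFS=,; printf '%s' "${config[*]}")
jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${blocks} ] } ] }
EOF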
00:35:14.015 23:38:01 -- nvmf/common.sh@546 -- # IFS=, 00:35:14.015 23:38:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:14.015 "params": { 00:35:14.015 "name": "Nvme0", 00:35:14.015 "trtype": "tcp", 00:35:14.015 "traddr": "10.0.0.2", 00:35:14.015 "adrfam": "ipv4", 00:35:14.015 "trsvcid": "4420", 00:35:14.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.015 "hdgst": false, 00:35:14.015 "ddgst": false 00:35:14.015 }, 00:35:14.015 "method": "bdev_nvme_attach_controller" 00:35:14.015 },{ 00:35:14.015 "params": { 00:35:14.015 "name": "Nvme1", 00:35:14.015 "trtype": "tcp", 00:35:14.015 "traddr": "10.0.0.2", 00:35:14.015 "adrfam": "ipv4", 00:35:14.015 "trsvcid": "4420", 00:35:14.015 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:14.015 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:14.015 "hdgst": false, 00:35:14.015 "ddgst": false 00:35:14.015 }, 00:35:14.015 "method": "bdev_nvme_attach_controller" 00:35:14.015 }' 00:35:14.015 23:38:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:14.015 23:38:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:14.015 23:38:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.015 23:38:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.015 23:38:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:14.015 23:38:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:14.015 23:38:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:14.015 23:38:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:14.015 23:38:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:14.015 23:38:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.015 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:14.015 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:14.015 fio-3.35 00:35:14.015 Starting 2 threads 00:35:14.015 EAL: No free 2048 kB hugepages reported on node 1 00:35:24.047 00:35:24.047 filename0: (groupid=0, jobs=1): err= 0: pid=14173: Fri Apr 26 23:38:12 2024 00:35:24.047 read: IOPS=142, BW=571KiB/s (584kB/s)(5728KiB/10040msec) 00:35:24.047 slat (nsec): min=5345, max=31323, avg=6504.47, stdev=1571.74 00:35:24.047 clat (usec): min=467, max=42494, avg=28026.35, stdev=19430.71 00:35:24.047 lat (usec): min=473, max=42500, avg=28032.86, stdev=19430.49 00:35:24.047 clat percentiles (usec): 00:35:24.047 | 1.00th=[ 474], 5.00th=[ 482], 10.00th=[ 486], 20.00th=[ 502], 00:35:24.047 | 30.00th=[ 523], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:24.047 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:24.047 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:24.047 | 99.99th=[42730] 00:35:24.047 bw ( KiB/s): min= 416, max= 768, per=43.59%, avg=571.20, stdev=111.95, samples=20 00:35:24.047 iops : min= 104, max= 192, avg=142.80, stdev=27.99, samples=20 00:35:24.047 lat (usec) : 500=18.51%, 750=14.46%, 1000=0.28% 00:35:24.047 lat (msec) : 50=66.76% 00:35:24.047 cpu : usr=96.80%, sys=2.95%, ctx=60, majf=0, minf=168 00:35:24.047 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.047 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.047 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.047 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:24.047 filename1: (groupid=0, jobs=1): err= 0: pid=14174: Fri Apr 26 23:38:12 2024 00:35:24.047 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10018msec) 00:35:24.047 slat (nsec): min=5328, max=33406, avg=6119.80, stdev=1293.77 00:35:24.047 clat (usec): min=866, max=44021, avg=21571.67, stdev=20470.93 00:35:24.047 lat (usec): min=871, max=44054, avg=21577.79, stdev=20470.89 00:35:24.047 clat percentiles (usec): 00:35:24.047 | 1.00th=[ 938], 5.00th=[ 979], 10.00th=[ 996], 20.00th=[ 1004], 00:35:24.047 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[41157], 60.00th=[41681], 00:35:24.047 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:24.047 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:35:24.047 | 99.99th=[43779] 00:35:24.047 bw ( KiB/s): min= 672, max= 768, per=56.49%, avg=740.80, stdev=33.28, samples=20 00:35:24.047 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:35:24.047 lat (usec) : 1000=15.62% 00:35:24.047 lat (msec) : 2=34.16%, 50=50.22% 00:35:24.047 cpu : usr=96.81%, sys=2.97%, ctx=11, majf=0, minf=64 00:35:24.047 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.047 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.047 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:24.047 00:35:24.047 Run status group 0 (all jobs): 00:35:24.047 READ: bw=1310KiB/s (1341kB/s), 571KiB/s-741KiB/s (584kB/s-759kB/s), io=12.8MiB (13.5MB), run=10018-10040msec 00:35:24.047 23:38:13 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:24.047 23:38:13 -- target/dif.sh@43 -- # local sub 00:35:24.047 23:38:13 -- target/dif.sh@45 -- # for sub in "$@" 00:35:24.047 23:38:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:24.047 23:38:13 -- target/dif.sh@36 -- # local sub_id=0 00:35:24.047 23:38:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:24.047 23:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:24.047 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.047 23:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:24.047 23:38:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:24.047 23:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:24.047 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.047 23:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:24.047 23:38:13 -- target/dif.sh@45 -- # for sub in "$@" 00:35:24.047 23:38:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:24.047 23:38:13 -- target/dif.sh@36 -- # local sub_id=1 00:35:24.047 23:38:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:24.047 23:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:24.047 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.047 23:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:24.047 23:38:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:24.047 23:38:13 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:35:24.047 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.047 23:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:24.047 00:35:24.047 real 0m11.471s 00:35:24.047 user 0m33.522s 00:35:24.047 sys 0m0.911s 00:35:24.047 23:38:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:24.047 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.047 ************************************ 00:35:24.047 END TEST fio_dif_1_multi_subsystems 00:35:24.047 ************************************ 00:35:24.047 23:38:13 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:24.047 23:38:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:24.047 23:38:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:24.047 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.309 ************************************ 00:35:24.309 START TEST fio_dif_rand_params 00:35:24.309 ************************************ 00:35:24.309 23:38:13 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:35:24.309 23:38:13 -- target/dif.sh@100 -- # local NULL_DIF 00:35:24.309 23:38:13 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:24.309 23:38:13 -- target/dif.sh@103 -- # NULL_DIF=3 00:35:24.309 23:38:13 -- target/dif.sh@103 -- # bs=128k 00:35:24.309 23:38:13 -- target/dif.sh@103 -- # numjobs=3 00:35:24.309 23:38:13 -- target/dif.sh@103 -- # iodepth=3 00:35:24.309 23:38:13 -- target/dif.sh@103 -- # runtime=5 00:35:24.309 23:38:13 -- target/dif.sh@105 -- # create_subsystems 0 00:35:24.309 23:38:13 -- target/dif.sh@28 -- # local sub 00:35:24.309 23:38:13 -- target/dif.sh@30 -- # for sub in "$@" 00:35:24.309 23:38:13 -- target/dif.sh@31 -- # create_subsystem 0 00:35:24.309 23:38:13 -- target/dif.sh@18 -- # local sub_id=0 00:35:24.309 23:38:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:24.309 23:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:24.309 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.309 bdev_null0 00:35:24.309 23:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:24.309 23:38:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:24.309 23:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:24.309 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.309 23:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:24.309 23:38:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:24.309 23:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:24.309 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.309 23:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:24.309 23:38:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:24.309 23:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:24.309 23:38:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.309 [2024-04-26 23:38:13.348599] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.309 23:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:24.309 23:38:13 -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:24.309 23:38:13 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:24.309 
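The create_subsystems step traced above drives the target through rpc_cmd, a thin wrapper that forwards each call to SPDK's scripts/rpc.py. The same DIF-type-3 setup can be issued by hand; every bdev name, NQN, and flag below is taken verbatim from the trace:

# rpc.py sits in the SPDK tree checked out for this job
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420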
23:38:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:24.309 23:38:13 -- nvmf/common.sh@521 -- # config=() 00:35:24.309 23:38:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.309 23:38:13 -- nvmf/common.sh@521 -- # local subsystem config 00:35:24.310 23:38:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:24.310 23:38:13 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.310 23:38:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:24.310 { 00:35:24.310 "params": { 00:35:24.310 "name": "Nvme$subsystem", 00:35:24.310 "trtype": "$TEST_TRANSPORT", 00:35:24.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:24.310 "adrfam": "ipv4", 00:35:24.310 "trsvcid": "$NVMF_PORT", 00:35:24.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:24.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:24.310 "hdgst": ${hdgst:-false}, 00:35:24.310 "ddgst": ${ddgst:-false} 00:35:24.310 }, 00:35:24.310 "method": "bdev_nvme_attach_controller" 00:35:24.310 } 00:35:24.310 EOF 00:35:24.310 )") 00:35:24.310 23:38:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:24.310 23:38:13 -- target/dif.sh@82 -- # gen_fio_conf 00:35:24.310 23:38:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:24.310 23:38:13 -- target/dif.sh@54 -- # local file 00:35:24.310 23:38:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:24.310 23:38:13 -- target/dif.sh@56 -- # cat 00:35:24.310 23:38:13 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.310 23:38:13 -- common/autotest_common.sh@1327 -- # shift 00:35:24.310 23:38:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:24.310 23:38:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.310 23:38:13 -- nvmf/common.sh@543 -- # cat 00:35:24.310 23:38:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.310 23:38:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:24.310 23:38:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:24.310 23:38:13 -- target/dif.sh@72 -- # (( file <= files )) 00:35:24.310 23:38:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:24.310 23:38:13 -- nvmf/common.sh@545 -- # jq . 
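The fio_plugin wrapper traced around this point (common/autotest_common.sh@1323-1338) probes the SPDK fio plugin with ldd for a linked sanitizer runtime and, when one is found, preloads it ahead of the plugin so the uninstrumented fio binary can still load it. A condensed sketch of that loop; in this run neither grep matches (note the [[ -n '' ]] checks), so asan_lib stays empty and only the plugin itself lands in LD_PRELOAD:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # the third ldd column is the resolved library path
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# fd 62 carries the JSON target config, fd 61 the generated fio job file
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61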
00:35:24.310 23:38:13 -- nvmf/common.sh@546 -- # IFS=, 00:35:24.310 23:38:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:24.310 "params": { 00:35:24.310 "name": "Nvme0", 00:35:24.310 "trtype": "tcp", 00:35:24.310 "traddr": "10.0.0.2", 00:35:24.310 "adrfam": "ipv4", 00:35:24.310 "trsvcid": "4420", 00:35:24.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:24.310 "hdgst": false, 00:35:24.310 "ddgst": false 00:35:24.310 }, 00:35:24.310 "method": "bdev_nvme_attach_controller" 00:35:24.310 }' 00:35:24.310 23:38:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:24.310 23:38:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:24.310 23:38:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.310 23:38:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.310 23:38:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:24.310 23:38:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:24.310 23:38:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:24.310 23:38:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:24.310 23:38:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:24.310 23:38:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.575 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:24.575 ... 00:35:24.575 fio-3.35 00:35:24.575 Starting 3 threads 00:35:24.575 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.166 00:35:31.166 filename0: (groupid=0, jobs=1): err= 0: pid=16380: Fri Apr 26 23:38:19 2024 00:35:31.166 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(152MiB/5048msec) 00:35:31.166 slat (nsec): min=4310, max=30039, avg=8505.54, stdev=822.81 00:35:31.166 clat (usec): min=6411, max=53870, avg=12375.74, stdev=7125.80 00:35:31.166 lat (usec): min=6420, max=53879, avg=12384.25, stdev=7125.81 00:35:31.166 clat percentiles (usec): 00:35:31.166 | 1.00th=[ 6849], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[ 9503], 00:35:31.166 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11207], 60.00th=[11863], 00:35:31.166 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14353], 95.00th=[15270], 00:35:31.166 | 99.00th=[51119], 99.50th=[52167], 99.90th=[52691], 99.95th=[53740], 00:35:31.166 | 99.99th=[53740] 00:35:31.166 bw ( KiB/s): min=25600, max=35584, per=36.40%, avg=31124.10, stdev=4036.14, samples=10 00:35:31.166 iops : min= 200, max= 278, avg=243.10, stdev=31.60, samples=10 00:35:31.166 lat (msec) : 10=30.35%, 20=66.53%, 50=1.48%, 100=1.64% 00:35:31.166 cpu : usr=96.29%, sys=3.43%, ctx=19, majf=0, minf=73 00:35:31.166 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:31.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.166 issued rwts: total=1219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:31.166 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:31.166 filename0: (groupid=0, jobs=1): err= 0: pid=16381: Fri Apr 26 23:38:19 2024 00:35:31.166 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(139MiB/5046msec) 00:35:31.166 slat (nsec): min=5352, max=30174, avg=6084.94, stdev=1515.32 00:35:31.167 clat (usec): 
min=5697, max=90868, avg=13603.52, stdev=9446.70 00:35:31.167 lat (usec): min=5702, max=90873, avg=13609.60, stdev=9446.67 00:35:31.167 clat percentiles (usec): 00:35:31.167 | 1.00th=[ 6194], 5.00th=[ 7832], 10.00th=[ 8356], 20.00th=[ 9372], 00:35:31.167 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:35:31.167 | 70.00th=[13042], 80.00th=[14091], 90.00th=[15401], 95.00th=[46924], 00:35:31.167 | 99.00th=[53216], 99.50th=[55313], 99.90th=[56886], 99.95th=[90702], 00:35:31.167 | 99.99th=[90702] 00:35:31.167 bw ( KiB/s): min=22272, max=33280, per=33.12%, avg=28319.10, stdev=3800.69, samples=10 00:35:31.167 iops : min= 174, max= 260, avg=221.20, stdev=29.70, samples=10 00:35:31.167 lat (msec) : 10=24.17%, 20=70.60%, 50=1.44%, 100=3.79% 00:35:31.167 cpu : usr=96.06%, sys=3.73%, ctx=30, majf=0, minf=179 00:35:31.167 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:31.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.167 issued rwts: total=1109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:31.167 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:31.167 filename0: (groupid=0, jobs=1): err= 0: pid=16382: Fri Apr 26 23:38:19 2024 00:35:31.167 read: IOPS=206, BW=25.9MiB/s (27.1MB/s)(131MiB/5045msec) 00:35:31.167 slat (nsec): min=5352, max=32182, avg=7776.76, stdev=1803.86 00:35:31.167 clat (usec): min=5717, max=92524, avg=14448.43, stdev=10787.56 00:35:31.167 lat (usec): min=5723, max=92533, avg=14456.21, stdev=10787.51 00:35:31.167 clat percentiles (usec): 00:35:31.167 | 1.00th=[ 6063], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9765], 00:35:31.167 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11863], 60.00th=[12387], 00:35:31.167 | 70.00th=[13173], 80.00th=[13960], 90.00th=[15401], 95.00th=[50594], 00:35:31.167 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55313], 99.95th=[92799], 00:35:31.167 | 99.99th=[92799] 00:35:31.167 bw ( KiB/s): min=22272, max=32512, per=31.17%, avg=26649.60, stdev=3498.56, samples=10 00:35:31.167 iops : min= 174, max= 254, avg=208.20, stdev=27.33, samples=10 00:35:31.167 lat (msec) : 10=21.17%, 20=71.55%, 50=1.34%, 100=5.94% 00:35:31.167 cpu : usr=96.21%, sys=3.55%, ctx=9, majf=0, minf=51 00:35:31.167 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:31.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.167 issued rwts: total=1044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:31.167 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:31.167 00:35:31.167 Run status group 0 (all jobs): 00:35:31.167 READ: bw=83.5MiB/s (87.6MB/s), 25.9MiB/s-30.2MiB/s (27.1MB/s-31.7MB/s), io=422MiB (442MB), run=5045-5048msec 00:35:31.167 23:38:19 -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:31.167 23:38:19 -- target/dif.sh@43 -- # local sub 00:35:31.167 23:38:19 -- target/dif.sh@45 -- # for sub in "$@" 00:35:31.167 23:38:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:31.167 23:38:19 -- target/dif.sh@36 -- # local sub_id=0 00:35:31.167 23:38:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
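The destroy_subsystems teardown traced here runs in the reverse order of creation: the NVMe-oF subsystem is deleted first, then its backing null bdev. The equivalent manual RPCs, reusing the $RPC path from the sketch above:

$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0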
00:35:31.167 23:38:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@109 -- # NULL_DIF=2 00:35:31.167 23:38:19 -- target/dif.sh@109 -- # bs=4k 00:35:31.167 23:38:19 -- target/dif.sh@109 -- # numjobs=8 00:35:31.167 23:38:19 -- target/dif.sh@109 -- # iodepth=16 00:35:31.167 23:38:19 -- target/dif.sh@109 -- # runtime= 00:35:31.167 23:38:19 -- target/dif.sh@109 -- # files=2 00:35:31.167 23:38:19 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:31.167 23:38:19 -- target/dif.sh@28 -- # local sub 00:35:31.167 23:38:19 -- target/dif.sh@30 -- # for sub in "$@" 00:35:31.167 23:38:19 -- target/dif.sh@31 -- # create_subsystem 0 00:35:31.167 23:38:19 -- target/dif.sh@18 -- # local sub_id=0 00:35:31.167 23:38:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 bdev_null0 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 [2024-04-26 23:38:19.488266] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@30 -- # for sub in "$@" 00:35:31.167 23:38:19 -- target/dif.sh@31 -- # create_subsystem 1 00:35:31.167 23:38:19 -- target/dif.sh@18 -- # local sub_id=1 00:35:31.167 23:38:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 bdev_null1 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:31.167 23:38:19 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@30 -- # for sub in "$@" 00:35:31.167 23:38:19 -- target/dif.sh@31 -- # create_subsystem 2 00:35:31.167 23:38:19 -- target/dif.sh@18 -- # local sub_id=2 00:35:31.167 23:38:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 bdev_null2 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:31.167 23:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:31.167 23:38:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.167 23:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:31.167 23:38:19 -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:31.167 23:38:19 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:31.167 23:38:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:31.167 23:38:19 -- nvmf/common.sh@521 -- # config=() 00:35:31.167 23:38:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.167 23:38:19 -- nvmf/common.sh@521 -- # local subsystem config 00:35:31.167 23:38:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:31.167 23:38:19 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.167 23:38:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:31.167 { 00:35:31.167 "params": { 00:35:31.167 "name": "Nvme$subsystem", 00:35:31.167 "trtype": "$TEST_TRANSPORT", 00:35:31.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:31.167 "adrfam": "ipv4", 00:35:31.167 "trsvcid": "$NVMF_PORT", 00:35:31.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:31.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:31.167 "hdgst": ${hdgst:-false}, 00:35:31.167 "ddgst": ${ddgst:-false} 00:35:31.167 }, 00:35:31.167 "method": "bdev_nvme_attach_controller" 00:35:31.167 } 00:35:31.167 EOF 00:35:31.167 )") 00:35:31.167 23:38:19 -- target/dif.sh@82 -- # gen_fio_conf 
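gen_fio_conf, invoked above, streams a job file to fio on /dev/fd/61, and the file++ / cat lines in the trace append one [filenameN] section per attached bdev. A sketch of the job file implied for this NULL_DIF=2 run, with bs/iodepth/numjobs taken from the dif.sh@109 settings above (3 sections at numjobs=8 gives the 24 threads fio reports below); the per-section bdev names (Nvme0n1 and so on) are assumed rather than read from the script:

gen_fio_conf() {
cat <<EOF
[global]
ioengine=spdk_bdev
bs=4k
iodepth=16
numjobs=8
rw=randread

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF
}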
00:35:31.167 23:38:19 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:31.167 23:38:19 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:31.167 23:38:19 -- target/dif.sh@54 -- # local file 00:35:31.167 23:38:19 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:31.167 23:38:19 -- target/dif.sh@56 -- # cat 00:35:31.167 23:38:19 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:31.167 23:38:19 -- common/autotest_common.sh@1327 -- # shift 00:35:31.167 23:38:19 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:31.167 23:38:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:31.167 23:38:19 -- nvmf/common.sh@543 -- # cat 00:35:31.167 23:38:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:31.167 23:38:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:31.168 23:38:19 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:31.168 23:38:19 -- target/dif.sh@72 -- # (( file <= files )) 00:35:31.168 23:38:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:31.168 23:38:19 -- target/dif.sh@73 -- # cat 00:35:31.168 23:38:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:31.168 23:38:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:31.168 { 00:35:31.168 "params": { 00:35:31.168 "name": "Nvme$subsystem", 00:35:31.168 "trtype": "$TEST_TRANSPORT", 00:35:31.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:31.168 "adrfam": "ipv4", 00:35:31.168 "trsvcid": "$NVMF_PORT", 00:35:31.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:31.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:31.168 "hdgst": ${hdgst:-false}, 00:35:31.168 "ddgst": ${ddgst:-false} 00:35:31.168 }, 00:35:31.168 "method": "bdev_nvme_attach_controller" 00:35:31.168 } 00:35:31.168 EOF 00:35:31.168 )") 00:35:31.168 23:38:19 -- target/dif.sh@72 -- # (( file++ )) 00:35:31.168 23:38:19 -- target/dif.sh@72 -- # (( file <= files )) 00:35:31.168 23:38:19 -- nvmf/common.sh@543 -- # cat 00:35:31.168 23:38:19 -- target/dif.sh@73 -- # cat 00:35:31.168 23:38:19 -- target/dif.sh@72 -- # (( file++ )) 00:35:31.168 23:38:19 -- target/dif.sh@72 -- # (( file <= files )) 00:35:31.168 23:38:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:31.168 23:38:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:31.168 { 00:35:31.168 "params": { 00:35:31.168 "name": "Nvme$subsystem", 00:35:31.168 "trtype": "$TEST_TRANSPORT", 00:35:31.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:31.168 "adrfam": "ipv4", 00:35:31.168 "trsvcid": "$NVMF_PORT", 00:35:31.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:31.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:31.168 "hdgst": ${hdgst:-false}, 00:35:31.168 "ddgst": ${ddgst:-false} 00:35:31.168 }, 00:35:31.168 "method": "bdev_nvme_attach_controller" 00:35:31.168 } 00:35:31.168 EOF 00:35:31.168 )") 00:35:31.168 23:38:19 -- nvmf/common.sh@543 -- # cat 00:35:31.168 23:38:19 -- nvmf/common.sh@545 -- # jq . 
00:35:31.168 23:38:19 -- nvmf/common.sh@546 -- # IFS=, 00:35:31.168 23:38:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:31.168 "params": { 00:35:31.168 "name": "Nvme0", 00:35:31.168 "trtype": "tcp", 00:35:31.168 "traddr": "10.0.0.2", 00:35:31.168 "adrfam": "ipv4", 00:35:31.168 "trsvcid": "4420", 00:35:31.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:31.168 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:31.168 "hdgst": false, 00:35:31.168 "ddgst": false 00:35:31.168 }, 00:35:31.168 "method": "bdev_nvme_attach_controller" 00:35:31.168 },{ 00:35:31.168 "params": { 00:35:31.168 "name": "Nvme1", 00:35:31.168 "trtype": "tcp", 00:35:31.168 "traddr": "10.0.0.2", 00:35:31.168 "adrfam": "ipv4", 00:35:31.168 "trsvcid": "4420", 00:35:31.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:31.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:31.168 "hdgst": false, 00:35:31.168 "ddgst": false 00:35:31.168 }, 00:35:31.168 "method": "bdev_nvme_attach_controller" 00:35:31.168 },{ 00:35:31.168 "params": { 00:35:31.168 "name": "Nvme2", 00:35:31.168 "trtype": "tcp", 00:35:31.168 "traddr": "10.0.0.2", 00:35:31.168 "adrfam": "ipv4", 00:35:31.168 "trsvcid": "4420", 00:35:31.168 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:31.168 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:31.168 "hdgst": false, 00:35:31.168 "ddgst": false 00:35:31.168 }, 00:35:31.168 "method": "bdev_nvme_attach_controller" 00:35:31.168 }' 00:35:31.168 23:38:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:31.168 23:38:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:31.168 23:38:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:31.168 23:38:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:31.168 23:38:19 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:31.168 23:38:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:31.168 23:38:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:31.168 23:38:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:31.168 23:38:19 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:31.168 23:38:19 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.168 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:31.168 ... 00:35:31.168 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:31.168 ... 00:35:31.168 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:31.168 ... 
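The per-job summaries that follow can be sanity-checked by hand, since fio's reported bandwidth is simply IOPS times the 4 KiB block size. For the first job below (pid=17882):

533.5 IOPS x 4 KiB = 2134 KiB/s, matching the reported BW=2134KiB/s

and the parenthesised figure is the same value in decimal units: 2134 KiB/s x 1.024 = 2185 kB/s.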
00:35:31.168 fio-3.35 00:35:31.168 Starting 24 threads 00:35:31.168 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.409 00:35:43.409 filename0: (groupid=0, jobs=1): err= 0: pid=17882: Fri Apr 26 23:38:30 2024 00:35:43.409 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.9MiB/10021msec) 00:35:43.409 slat (nsec): min=5495, max=60675, avg=10235.46, stdev=7185.92 00:35:43.409 clat (usec): min=1754, max=57925, avg=29899.79, stdev=6423.90 00:35:43.409 lat (usec): min=1770, max=57937, avg=29910.02, stdev=6424.81 00:35:43.409 clat percentiles (usec): 00:35:43.409 | 1.00th=[ 1991], 5.00th=[20579], 10.00th=[21890], 20.00th=[23725], 00:35:43.409 | 30.00th=[31851], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:35:43.409 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[33817], 00:35:43.409 | 99.00th=[45351], 99.50th=[49546], 99.90th=[57934], 99.95th=[57934], 00:35:43.409 | 99.99th=[57934] 00:35:43.409 bw ( KiB/s): min= 1920, max= 2816, per=4.53%, avg=2134.15, stdev=252.60, samples=20 00:35:43.409 iops : min= 480, max= 704, avg=533.50, stdev=63.12, samples=20 00:35:43.409 lat (msec) : 2=1.03%, 4=0.77%, 10=0.30%, 20=1.57%, 50=95.88% 00:35:43.409 lat (msec) : 100=0.45% 00:35:43.409 cpu : usr=98.18%, sys=1.14%, ctx=172, majf=0, minf=9 00:35:43.409 IO depths : 1=3.7%, 2=7.5%, 4=17.4%, 8=62.2%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:43.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.409 complete : 0=0.0%, 4=92.1%, 8=2.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.409 issued rwts: total=5346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.410 filename0: (groupid=0, jobs=1): err= 0: pid=17883: Fri Apr 26 23:38:30 2024 00:35:43.410 read: IOPS=484, BW=1938KiB/s (1985kB/s)(18.9MiB/10004msec) 00:35:43.410 slat (nsec): min=5517, max=48363, avg=13211.57, stdev=7919.63 00:35:43.410 clat (usec): min=14964, max=51311, avg=32896.81, stdev=1669.37 00:35:43.410 lat (usec): min=14972, max=51328, avg=32910.02, stdev=1669.47 00:35:43.410 clat percentiles (usec): 00:35:43.410 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:43.410 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:43.410 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:43.410 | 99.00th=[35390], 99.50th=[35914], 99.90th=[51119], 99.95th=[51119], 00:35:43.410 | 99.99th=[51119] 00:35:43.410 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1933.42, stdev=72.32, samples=19 00:35:43.410 iops : min= 448, max= 512, avg=483.32, stdev=18.16, samples=19 00:35:43.410 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:35:43.410 cpu : usr=99.19%, sys=0.54%, ctx=14, majf=0, minf=9 00:35:43.410 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:43.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.410 filename0: (groupid=0, jobs=1): err= 0: pid=17884: Fri Apr 26 23:38:30 2024 00:35:43.410 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10007msec) 00:35:43.410 slat (usec): min=5, max=101, avg=26.28, stdev=16.14 00:35:43.410 clat (usec): min=30195, max=43013, avg=32767.02, stdev=983.77 00:35:43.410 lat (usec): min=30204, max=43034, avg=32793.30, stdev=980.62 00:35:43.410 clat 
percentiles (usec): 00:35:43.410 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:35:43.410 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:35:43.410 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.410 | 99.00th=[35390], 99.50th=[35914], 99.90th=[42730], 99.95th=[43254], 00:35:43.410 | 99.99th=[43254] 00:35:43.410 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1940.37, stdev=47.89, samples=19 00:35:43.410 iops : min= 480, max= 512, avg=485.05, stdev=11.99, samples=19 00:35:43.410 lat (msec) : 50=100.00% 00:35:43.410 cpu : usr=98.13%, sys=1.07%, ctx=990, majf=0, minf=9 00:35:43.410 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:43.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.410 filename0: (groupid=0, jobs=1): err= 0: pid=17885: Fri Apr 26 23:38:30 2024 00:35:43.410 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10004msec) 00:35:43.410 slat (nsec): min=5380, max=96088, avg=14467.05, stdev=11636.77 00:35:43.410 clat (usec): min=11484, max=76701, avg=32821.66, stdev=4104.87 00:35:43.410 lat (usec): min=11489, max=76718, avg=32836.13, stdev=4104.06 00:35:43.410 clat percentiles (usec): 00:35:43.410 | 1.00th=[21890], 5.00th=[24773], 10.00th=[29230], 20.00th=[32375], 00:35:43.410 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:43.410 | 70.00th=[33424], 80.00th=[33817], 90.00th=[35390], 95.00th=[38011], 00:35:43.410 | 99.00th=[50594], 99.50th=[52691], 99.90th=[56361], 99.95th=[56886], 00:35:43.410 | 99.99th=[77071] 00:35:43.410 bw ( KiB/s): min= 1763, max= 2032, per=4.12%, avg=1940.16, stdev=64.68, samples=19 00:35:43.410 iops : min= 440, max= 508, avg=485.00, stdev=16.29, samples=19 00:35:43.410 lat (msec) : 20=0.35%, 50=98.58%, 100=1.07% 00:35:43.410 cpu : usr=98.97%, sys=0.68%, ctx=62, majf=0, minf=9 00:35:43.410 IO depths : 1=1.4%, 2=3.6%, 4=13.0%, 8=68.6%, 16=13.4%, 32=0.0%, >=64=0.0% 00:35:43.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 complete : 0=0.0%, 4=91.5%, 8=5.1%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.410 filename0: (groupid=0, jobs=1): err= 0: pid=17886: Fri Apr 26 23:38:30 2024 00:35:43.410 read: IOPS=485, BW=1942KiB/s (1989kB/s)(19.0MiB/10017msec) 00:35:43.410 slat (nsec): min=5530, max=89385, avg=23819.30, stdev=14055.05 00:35:43.410 clat (usec): min=18073, max=36217, avg=32721.80, stdev=1147.65 00:35:43.410 lat (usec): min=18092, max=36228, avg=32745.62, stdev=1145.95 00:35:43.410 clat percentiles (usec): 00:35:43.410 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:35:43.410 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:35:43.410 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.410 | 99.00th=[34866], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:35:43.410 | 99.99th=[36439] 00:35:43.410 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1940.21, stdev=64.19, samples=19 00:35:43.410 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:35:43.410 lat (msec) : 20=0.33%, 50=99.67% 
00:35:43.410 cpu : usr=99.10%, sys=0.62%, ctx=7, majf=0, minf=9 00:35:43.410 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:43.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.410 filename0: (groupid=0, jobs=1): err= 0: pid=17887: Fri Apr 26 23:38:30 2024 00:35:43.410 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10005msec) 00:35:43.410 slat (nsec): min=5546, max=82437, avg=9006.91, stdev=5421.97 00:35:43.410 clat (usec): min=3978, max=39028, avg=32727.90, stdev=2232.33 00:35:43.410 lat (usec): min=3998, max=39036, avg=32736.91, stdev=2231.40 00:35:43.410 clat percentiles (usec): 00:35:43.410 | 1.00th=[24249], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:43.410 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:43.410 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.410 | 99.00th=[35914], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:35:43.410 | 99.99th=[39060] 00:35:43.410 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1953.47, stdev=56.30, samples=19 00:35:43.410 iops : min= 479, max= 512, avg=488.37, stdev=14.08, samples=19 00:35:43.410 lat (msec) : 4=0.04%, 10=0.29%, 50=99.67% 00:35:43.410 cpu : usr=99.05%, sys=0.64%, ctx=45, majf=0, minf=9 00:35:43.410 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:35:43.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.410 filename0: (groupid=0, jobs=1): err= 0: pid=17888: Fri Apr 26 23:38:30 2024 00:35:43.410 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10007msec) 00:35:43.410 slat (nsec): min=5502, max=96600, avg=21309.63, stdev=17443.31 00:35:43.410 clat (usec): min=23184, max=43737, avg=32858.37, stdev=1121.78 00:35:43.410 lat (usec): min=23190, max=43742, avg=32879.68, stdev=1115.55 00:35:43.410 clat percentiles (usec): 00:35:43.410 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:35:43.410 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:43.410 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.410 | 99.00th=[35914], 99.50th=[35914], 99.90th=[43254], 99.95th=[43254], 00:35:43.410 | 99.99th=[43779] 00:35:43.410 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1940.37, stdev=47.89, samples=19 00:35:43.410 iops : min= 480, max= 512, avg=485.05, stdev=11.99, samples=19 00:35:43.410 lat (msec) : 50=100.00% 00:35:43.410 cpu : usr=99.13%, sys=0.54%, ctx=45, majf=0, minf=9 00:35:43.410 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:43.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.410 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.410 filename0: (groupid=0, jobs=1): err= 0: pid=17889: Fri Apr 26 23:38:30 2024 00:35:43.410 read: IOPS=484, BW=1938KiB/s 
(1985kB/s)(18.9MiB/10005msec) 00:35:43.410 slat (nsec): min=5501, max=68438, avg=9461.17, stdev=6419.70 00:35:43.410 clat (usec): min=15364, max=51808, avg=32938.11, stdev=1720.85 00:35:43.410 lat (usec): min=15371, max=51823, avg=32947.57, stdev=1720.81 00:35:43.410 clat percentiles (usec): 00:35:43.410 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:43.410 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:43.410 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:43.410 | 99.00th=[35390], 99.50th=[35914], 99.90th=[51643], 99.95th=[51643], 00:35:43.410 | 99.99th=[51643] 00:35:43.410 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1933.26, stdev=84.24, samples=19 00:35:43.410 iops : min= 448, max= 512, avg=483.32, stdev=21.06, samples=19 00:35:43.411 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:35:43.411 cpu : usr=98.37%, sys=0.97%, ctx=58, majf=0, minf=9 00:35:43.411 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:43.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.411 filename1: (groupid=0, jobs=1): err= 0: pid=17890: Fri Apr 26 23:38:30 2024 00:35:43.411 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:35:43.411 slat (nsec): min=5281, max=97339, avg=25794.31, stdev=16847.20 00:35:43.411 clat (usec): min=30162, max=44085, avg=32807.87, stdev=1026.64 00:35:43.411 lat (usec): min=30185, max=44099, avg=32833.66, stdev=1021.52 00:35:43.411 clat percentiles (usec): 00:35:43.411 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:35:43.411 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:35:43.411 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.411 | 99.00th=[35390], 99.50th=[35914], 99.90th=[44303], 99.95th=[44303], 00:35:43.411 | 99.99th=[44303] 00:35:43.411 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1940.21, stdev=47.95, samples=19 00:35:43.411 iops : min= 480, max= 512, avg=485.05, stdev=11.99, samples=19 00:35:43.411 lat (msec) : 50=100.00% 00:35:43.411 cpu : usr=99.05%, sys=0.67%, ctx=17, majf=0, minf=9 00:35:43.411 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:43.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.411 filename1: (groupid=0, jobs=1): err= 0: pid=17891: Fri Apr 26 23:38:30 2024 00:35:43.411 read: IOPS=485, BW=1944KiB/s (1990kB/s)(19.0MiB/10010msec) 00:35:43.411 slat (nsec): min=5566, max=47883, avg=10194.38, stdev=6096.41 00:35:43.411 clat (usec): min=11647, max=45264, avg=32831.69, stdev=1709.41 00:35:43.411 lat (usec): min=11653, max=45273, avg=32841.88, stdev=1709.61 00:35:43.411 clat percentiles (usec): 00:35:43.411 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:43.411 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:35:43.411 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.411 | 99.00th=[35390], 99.50th=[35914], 99.90th=[43254], 
99.95th=[43254], 00:35:43.411 | 99.99th=[45351] 00:35:43.411 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1939.79, stdev=64.34, samples=19 00:35:43.411 iops : min= 448, max= 512, avg=484.95, stdev=16.08, samples=19 00:35:43.411 lat (msec) : 20=0.33%, 50=99.67% 00:35:43.411 cpu : usr=99.17%, sys=0.55%, ctx=52, majf=0, minf=9 00:35:43.411 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:43.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.411 filename1: (groupid=0, jobs=1): err= 0: pid=17892: Fri Apr 26 23:38:30 2024 00:35:43.411 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.5MiB/10005msec) 00:35:43.411 slat (nsec): min=5560, max=75561, avg=8796.67, stdev=5166.07 00:35:43.411 clat (usec): min=14304, max=34878, avg=30426.92, stdev=4446.91 00:35:43.411 lat (usec): min=14311, max=34884, avg=30435.71, stdev=4447.51 00:35:43.411 clat percentiles (usec): 00:35:43.411 | 1.00th=[19006], 5.00th=[21103], 10.00th=[22152], 20.00th=[25035], 00:35:43.411 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:43.411 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:35:43.411 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:35:43.411 | 99.99th=[34866] 00:35:43.411 bw ( KiB/s): min= 1920, max= 2688, per=4.42%, avg=2081.42, stdev=216.61, samples=19 00:35:43.411 iops : min= 480, max= 672, avg=520.32, stdev=54.11, samples=19 00:35:43.411 lat (msec) : 20=2.13%, 50=97.87% 00:35:43.411 cpu : usr=98.34%, sys=0.92%, ctx=97, majf=0, minf=9 00:35:43.411 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:43.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.411 filename1: (groupid=0, jobs=1): err= 0: pid=17893: Fri Apr 26 23:38:30 2024 00:35:43.411 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10013msec) 00:35:43.411 slat (nsec): min=5527, max=86404, avg=25077.33, stdev=14257.27 00:35:43.411 clat (usec): min=21428, max=38270, avg=32717.24, stdev=1085.83 00:35:43.411 lat (usec): min=21444, max=38302, avg=32742.32, stdev=1083.10 00:35:43.411 clat percentiles (usec): 00:35:43.411 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:35:43.411 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:43.411 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.411 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[38011], 00:35:43.411 | 99.99th=[38011] 00:35:43.411 bw ( KiB/s): min= 1916, max= 2048, per=4.13%, avg=1946.32, stdev=53.97, samples=19 00:35:43.411 iops : min= 479, max= 512, avg=486.58, stdev=13.49, samples=19 00:35:43.411 lat (msec) : 50=100.00% 00:35:43.411 cpu : usr=99.03%, sys=0.70%, ctx=5, majf=0, minf=9 00:35:43.411 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:43.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 issued rwts: 
total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.411 filename1: (groupid=0, jobs=1): err= 0: pid=17894: Fri Apr 26 23:38:30 2024 00:35:43.411 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10005msec) 00:35:43.411 slat (nsec): min=5489, max=61688, avg=14913.11, stdev=9419.53 00:35:43.411 clat (usec): min=5373, max=76487, avg=32824.99, stdev=2782.44 00:35:43.411 lat (usec): min=5379, max=76504, avg=32839.90, stdev=2782.56 00:35:43.411 clat percentiles (usec): 00:35:43.411 | 1.00th=[21890], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:35:43.411 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:35:43.411 | 70.00th=[33162], 80.00th=[33424], 90.00th=[34341], 95.00th=[34866], 00:35:43.411 | 99.00th=[39584], 99.50th=[49021], 99.90th=[55313], 99.95th=[55313], 00:35:43.411 | 99.99th=[76022] 00:35:43.411 bw ( KiB/s): min= 1779, max= 2048, per=4.11%, avg=1938.47, stdev=67.29, samples=19 00:35:43.411 iops : min= 444, max= 512, avg=484.58, stdev=16.92, samples=19 00:35:43.411 lat (msec) : 10=0.08%, 20=0.49%, 50=98.93%, 100=0.49% 00:35:43.411 cpu : usr=98.94%, sys=0.75%, ctx=66, majf=0, minf=9 00:35:43.411 IO depths : 1=0.7%, 2=5.4%, 4=19.1%, 8=61.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:35:43.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 complete : 0=0.0%, 4=93.1%, 8=2.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.411 issued rwts: total=4860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.411 filename1: (groupid=0, jobs=1): err= 0: pid=17895: Fri Apr 26 23:38:30 2024 00:35:43.411 read: IOPS=485, BW=1942KiB/s (1989kB/s)(19.0MiB/10016msec) 00:35:43.411 slat (usec): min=5, max=108, avg=26.42, stdev=15.41 00:35:43.411 clat (usec): min=18003, max=36107, avg=32714.65, stdev=1152.25 00:35:43.411 lat (usec): min=18009, max=36118, avg=32741.07, stdev=1149.72 00:35:43.411 clat percentiles (usec): 00:35:43.411 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:35:43.411 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:35:43.411 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.411 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:35:43.411 | 99.99th=[35914] 00:35:43.411 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1940.37, stdev=64.14, samples=19 00:35:43.412 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:35:43.412 lat (msec) : 20=0.33%, 50=99.67% 00:35:43.412 cpu : usr=99.09%, sys=0.59%, ctx=21, majf=0, minf=9 00:35:43.412 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:43.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.412 filename1: (groupid=0, jobs=1): err= 0: pid=17896: Fri Apr 26 23:38:30 2024 00:35:43.412 read: IOPS=484, BW=1938KiB/s (1985kB/s)(18.9MiB/10004msec) 00:35:43.412 slat (nsec): min=5637, max=63141, avg=17797.70, stdev=10492.33 00:35:43.412 clat (usec): min=13672, max=70227, avg=32860.55, stdev=1838.19 00:35:43.412 lat (usec): min=13681, max=70246, avg=32878.34, stdev=1837.73 00:35:43.412 clat percentiles (usec): 00:35:43.412 | 1.00th=[31589], 5.00th=[32113], 
10.00th=[32113], 20.00th=[32375], 00:35:43.412 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:43.412 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.412 | 99.00th=[35390], 99.50th=[35914], 99.90th=[51119], 99.95th=[51119], 00:35:43.412 | 99.99th=[69731] 00:35:43.412 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1933.42, stdev=72.32, samples=19 00:35:43.412 iops : min= 448, max= 512, avg=483.32, stdev=18.16, samples=19 00:35:43.412 lat (msec) : 20=0.37%, 50=99.30%, 100=0.33% 00:35:43.412 cpu : usr=98.68%, sys=0.90%, ctx=107, majf=0, minf=9 00:35:43.412 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:43.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.412 filename1: (groupid=0, jobs=1): err= 0: pid=17897: Fri Apr 26 23:38:30 2024 00:35:43.412 read: IOPS=486, BW=1946KiB/s (1993kB/s)(19.1MiB/10027msec) 00:35:43.412 slat (usec): min=5, max=106, avg=17.03, stdev=13.14 00:35:43.412 clat (usec): min=20787, max=46373, avg=32759.70, stdev=1602.14 00:35:43.412 lat (usec): min=20801, max=46379, avg=32776.73, stdev=1601.33 00:35:43.412 clat percentiles (usec): 00:35:43.412 | 1.00th=[22414], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:43.412 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:35:43.412 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.412 | 99.00th=[35914], 99.50th=[36439], 99.90th=[44303], 99.95th=[46400], 00:35:43.412 | 99.99th=[46400] 00:35:43.412 bw ( KiB/s): min= 1904, max= 2048, per=4.13%, avg=1945.40, stdev=51.08, samples=20 00:35:43.412 iops : min= 476, max= 512, avg=486.35, stdev=12.77, samples=20 00:35:43.412 lat (msec) : 50=100.00% 00:35:43.412 cpu : usr=98.98%, sys=0.71%, ctx=38, majf=0, minf=9 00:35:43.412 IO depths : 1=1.3%, 2=7.5%, 4=24.9%, 8=55.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:43.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 issued rwts: total=4878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.412 filename2: (groupid=0, jobs=1): err= 0: pid=17898: Fri Apr 26 23:38:30 2024 00:35:43.412 read: IOPS=492, BW=1971KiB/s (2019kB/s)(19.3MiB/10027msec) 00:35:43.412 slat (nsec): min=5500, max=99131, avg=25329.53, stdev=18158.96 00:35:43.412 clat (usec): min=14811, max=58182, avg=32197.43, stdev=3668.76 00:35:43.412 lat (usec): min=14819, max=58190, avg=32222.76, stdev=3670.34 00:35:43.412 clat percentiles (usec): 00:35:43.412 | 1.00th=[21627], 5.00th=[23987], 10.00th=[28705], 20.00th=[32113], 00:35:43.412 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:35:43.412 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:35:43.412 | 99.00th=[47449], 99.50th=[49021], 99.90th=[57934], 99.95th=[57934], 00:35:43.412 | 99.99th=[57934] 00:35:43.412 bw ( KiB/s): min= 1840, max= 2096, per=4.19%, avg=1974.15, stdev=74.27, samples=20 00:35:43.412 iops : min= 460, max= 524, avg=493.50, stdev=18.52, samples=20 00:35:43.412 lat (msec) : 20=0.61%, 50=98.99%, 100=0.40% 00:35:43.412 cpu : usr=98.32%, sys=0.94%, ctx=41, majf=0, minf=9 
00:35:43.412 IO depths : 1=4.8%, 2=9.7%, 4=20.6%, 8=56.8%, 16=8.2%, 32=0.0%, >=64=0.0% 00:35:43.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 complete : 0=0.0%, 4=93.0%, 8=1.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.412 filename2: (groupid=0, jobs=1): err= 0: pid=17899: Fri Apr 26 23:38:30 2024 00:35:43.412 read: IOPS=487, BW=1948KiB/s (1995kB/s)(19.0MiB/10006msec) 00:35:43.412 slat (nsec): min=5503, max=87054, avg=20376.47, stdev=15493.75 00:35:43.412 clat (usec): min=14799, max=61018, avg=32668.00, stdev=3661.86 00:35:43.412 lat (usec): min=14812, max=61036, avg=32688.38, stdev=3661.17 00:35:43.412 clat percentiles (usec): 00:35:43.412 | 1.00th=[21103], 5.00th=[26870], 10.00th=[31589], 20.00th=[32113], 00:35:43.412 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:35:43.412 | 70.00th=[32900], 80.00th=[33817], 90.00th=[34341], 95.00th=[36963], 00:35:43.412 | 99.00th=[47449], 99.50th=[50594], 99.90th=[61080], 99.95th=[61080], 00:35:43.412 | 99.99th=[61080] 00:35:43.412 bw ( KiB/s): min= 1792, max= 2112, per=4.13%, avg=1948.42, stdev=78.62, samples=19 00:35:43.412 iops : min= 448, max= 528, avg=487.11, stdev=19.66, samples=19 00:35:43.412 lat (msec) : 20=0.43%, 50=98.95%, 100=0.62% 00:35:43.412 cpu : usr=98.92%, sys=0.71%, ctx=68, majf=0, minf=9 00:35:43.412 IO depths : 1=4.2%, 2=8.5%, 4=18.3%, 8=59.6%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:43.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 complete : 0=0.0%, 4=92.5%, 8=2.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 issued rwts: total=4874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.412 filename2: (groupid=0, jobs=1): err= 0: pid=17900: Fri Apr 26 23:38:30 2024 00:35:43.412 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10014msec) 00:35:43.412 slat (nsec): min=5511, max=95751, avg=21152.99, stdev=17743.27 00:35:43.412 clat (usec): min=26938, max=42715, avg=32860.26, stdev=1041.63 00:35:43.412 lat (usec): min=26948, max=42732, avg=32881.42, stdev=1034.84 00:35:43.412 clat percentiles (usec): 00:35:43.412 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:35:43.412 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:43.412 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:43.412 | 99.00th=[35390], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730], 00:35:43.412 | 99.99th=[42730] 00:35:43.412 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1932.95, stdev=57.21, samples=20 00:35:43.412 iops : min= 448, max= 512, avg=483.20, stdev=14.31, samples=20 00:35:43.412 lat (msec) : 50=100.00% 00:35:43.412 cpu : usr=99.00%, sys=0.71%, ctx=15, majf=0, minf=9 00:35:43.412 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:43.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.412 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.412 filename2: (groupid=0, jobs=1): err= 0: pid=17901: Fri Apr 26 23:38:30 2024 00:35:43.412 read: IOPS=505, BW=2021KiB/s (2070kB/s)(19.8MiB/10005msec) 00:35:43.412 slat (nsec): min=5507, 
max=81255, avg=10600.75, stdev=6773.70 00:35:43.412 clat (usec): min=13891, max=44355, avg=31569.54, stdev=3517.85 00:35:43.412 lat (usec): min=13898, max=44365, avg=31580.15, stdev=3518.61 00:35:43.412 clat percentiles (usec): 00:35:43.412 | 1.00th=[20317], 5.00th=[21890], 10.00th=[24511], 20.00th=[32375], 00:35:43.412 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:43.412 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:43.412 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:35:43.413 | 99.99th=[44303] 00:35:43.413 bw ( KiB/s): min= 1920, max= 2432, per=4.30%, avg=2027.53, stdev=141.44, samples=19 00:35:43.413 iops : min= 480, max= 608, avg=506.84, stdev=35.32, samples=19 00:35:43.413 lat (msec) : 20=0.95%, 50=99.05% 00:35:43.413 cpu : usr=98.22%, sys=1.08%, ctx=53, majf=0, minf=9 00:35:43.413 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:43.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.413 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.413 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.413 filename2: (groupid=0, jobs=1): err= 0: pid=17902: Fri Apr 26 23:38:30 2024 00:35:43.413 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10008msec) 00:35:43.413 slat (nsec): min=5519, max=93796, avg=20488.55, stdev=16650.90 00:35:43.413 clat (usec): min=30063, max=44203, avg=32865.11, stdev=1022.55 00:35:43.413 lat (usec): min=30096, max=44221, avg=32885.60, stdev=1016.75 00:35:43.413 clat percentiles (usec): 00:35:43.413 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:35:43.413 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:43.413 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:43.413 | 99.00th=[35390], 99.50th=[35914], 99.90th=[44303], 99.95th=[44303], 00:35:43.413 | 99.99th=[44303] 00:35:43.413 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1940.21, stdev=47.95, samples=19 00:35:43.413 iops : min= 480, max= 512, avg=485.05, stdev=11.99, samples=19 00:35:43.413 lat (msec) : 50=100.00% 00:35:43.413 cpu : usr=99.15%, sys=0.57%, ctx=11, majf=0, minf=9 00:35:43.413 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:43.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.413 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.413 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.413 filename2: (groupid=0, jobs=1): err= 0: pid=17903: Fri Apr 26 23:38:30 2024 00:35:43.413 read: IOPS=485, BW=1940KiB/s (1987kB/s)(19.0MiB/10006msec) 00:35:43.413 slat (nsec): min=5540, max=57625, avg=15588.20, stdev=9197.28 00:35:43.413 clat (usec): min=7044, max=57365, avg=32841.72, stdev=2810.18 00:35:43.413 lat (usec): min=7050, max=57373, avg=32857.31, stdev=2810.07 00:35:43.413 clat percentiles (usec): 00:35:43.413 | 1.00th=[21365], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:43.413 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:43.413 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:43.413 | 99.00th=[47449], 99.50th=[52167], 99.90th=[55313], 99.95th=[55837], 00:35:43.413 | 99.99th=[57410] 00:35:43.413 bw ( KiB/s): 
min= 1792, max= 2048, per=4.11%, avg=1935.79, stdev=70.26, samples=19 00:35:43.413 iops : min= 448, max= 512, avg=483.95, stdev=17.56, samples=19 00:35:43.413 lat (msec) : 10=0.04%, 20=0.54%, 50=98.56%, 100=0.87% 00:35:43.413 cpu : usr=98.85%, sys=0.74%, ctx=122, majf=0, minf=9 00:35:43.413 IO depths : 1=4.3%, 2=10.5%, 4=24.8%, 8=52.3%, 16=8.2%, 32=0.0%, >=64=0.0% 00:35:43.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.413 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.413 issued rwts: total=4854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.413 filename2: (groupid=0, jobs=1): err= 0: pid=17904: Fri Apr 26 23:38:30 2024 00:35:43.413 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.4MiB/10018msec) 00:35:43.413 slat (nsec): min=5493, max=96725, avg=15397.53, stdev=13393.92 00:35:43.413 clat (usec): min=12963, max=42926, avg=30574.90, stdev=4547.40 00:35:43.413 lat (usec): min=12970, max=42942, avg=30590.30, stdev=4551.26 00:35:43.413 clat percentiles (usec): 00:35:43.413 | 1.00th=[15401], 5.00th=[21365], 10.00th=[22414], 20.00th=[28443], 00:35:43.413 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:35:43.413 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[33817], 00:35:43.413 | 99.00th=[34866], 99.50th=[35390], 99.90th=[42730], 99.95th=[42730], 00:35:43.413 | 99.99th=[42730] 00:35:43.413 bw ( KiB/s): min= 1920, max= 2784, per=4.42%, avg=2082.95, stdev=303.18, samples=20 00:35:43.413 iops : min= 480, max= 696, avg=520.70, stdev=75.81, samples=20 00:35:43.413 lat (msec) : 20=2.35%, 50=97.65% 00:35:43.413 cpu : usr=98.18%, sys=1.05%, ctx=147, majf=0, minf=9 00:35:43.413 IO depths : 1=4.8%, 2=9.7%, 4=20.6%, 8=57.1%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:43.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.413 complete : 0=0.0%, 4=92.8%, 8=1.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.413 issued rwts: total=5223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.413 filename2: (groupid=0, jobs=1): err= 0: pid=17905: Fri Apr 26 23:38:30 2024 00:35:43.413 read: IOPS=484, BW=1938KiB/s (1985kB/s)(18.9MiB/10004msec) 00:35:43.413 slat (nsec): min=5596, max=68621, avg=15908.78, stdev=10266.33 00:35:43.413 clat (usec): min=13589, max=69635, avg=32876.89, stdev=1945.61 00:35:43.413 lat (usec): min=13598, max=69653, avg=32892.80, stdev=1945.13 00:35:43.413 clat percentiles (usec): 00:35:43.413 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:43.413 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:35:43.413 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:43.413 | 99.00th=[35914], 99.50th=[36439], 99.90th=[50594], 99.95th=[50594], 00:35:43.413 | 99.99th=[69731] 00:35:43.413 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1933.42, stdev=72.32, samples=19 00:35:43.413 iops : min= 448, max= 512, avg=483.32, stdev=18.16, samples=19 00:35:43.413 lat (msec) : 20=0.45%, 50=99.22%, 100=0.33% 00:35:43.413 cpu : usr=98.40%, sys=0.89%, ctx=124, majf=0, minf=9 00:35:43.413 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:43.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.413 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.413 issued rwts: total=4848,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:35:43.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.413 00:35:43.413 Run status group 0 (all jobs): 00:35:43.413 READ: bw=46.0MiB/s (48.2MB/s), 1936KiB/s-2134KiB/s (1983kB/s-2185kB/s), io=461MiB (484MB), run=10004-10027msec 00:35:43.413 23:38:31 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:43.413 23:38:31 -- target/dif.sh@43 -- # local sub 00:35:43.413 23:38:31 -- target/dif.sh@45 -- # for sub in "$@" 00:35:43.413 23:38:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:43.413 23:38:31 -- target/dif.sh@36 -- # local sub_id=0 00:35:43.413 23:38:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:43.413 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.413 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.413 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.413 23:38:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:43.413 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.413 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.413 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.413 23:38:31 -- target/dif.sh@45 -- # for sub in "$@" 00:35:43.413 23:38:31 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:43.413 23:38:31 -- target/dif.sh@36 -- # local sub_id=1 00:35:43.413 23:38:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:43.413 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.413 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.413 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.413 23:38:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:43.413 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.413 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.413 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.413 23:38:31 -- target/dif.sh@45 -- # for sub in "$@" 00:35:43.413 23:38:31 -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:43.413 23:38:31 -- target/dif.sh@36 -- # local sub_id=2 00:35:43.414 23:38:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:43.414 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.414 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.414 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.414 23:38:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:43.414 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.414 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.414 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.414 23:38:31 -- target/dif.sh@115 -- # NULL_DIF=1 00:35:43.414 23:38:31 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:43.414 23:38:31 -- target/dif.sh@115 -- # numjobs=2 00:35:43.414 23:38:31 -- target/dif.sh@115 -- # iodepth=8 00:35:43.414 23:38:31 -- target/dif.sh@115 -- # runtime=5 00:35:43.414 23:38:31 -- target/dif.sh@115 -- # files=1 00:35:43.414 23:38:31 -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:43.414 23:38:31 -- target/dif.sh@28 -- # local sub 00:35:43.414 23:38:31 -- target/dif.sh@30 -- # for sub in "$@" 00:35:43.414 23:38:31 -- target/dif.sh@31 -- # create_subsystem 0 00:35:43.414 23:38:31 -- target/dif.sh@18 -- # local sub_id=0 00:35:43.414 23:38:31 -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:43.414 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.414 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.414 bdev_null0 00:35:43.414 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.414 23:38:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:43.414 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.414 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.414 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.414 23:38:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:43.414 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.414 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.414 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.414 23:38:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:43.414 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.414 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.414 [2024-04-26 23:38:31.180102] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.414 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.414 23:38:31 -- target/dif.sh@30 -- # for sub in "$@" 00:35:43.414 23:38:31 -- target/dif.sh@31 -- # create_subsystem 1 00:35:43.414 23:38:31 -- target/dif.sh@18 -- # local sub_id=1 00:35:43.414 23:38:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:43.414 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.414 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.414 bdev_null1 00:35:43.414 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.414 23:38:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:43.414 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.414 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.414 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.414 23:38:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:43.414 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.414 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.414 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.414 23:38:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.414 23:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.414 23:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.414 23:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.414 23:38:31 -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:43.414 23:38:31 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:43.414 23:38:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:43.414 23:38:31 -- nvmf/common.sh@521 -- # config=() 00:35:43.414 23:38:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.414 23:38:31 -- nvmf/common.sh@521 -- # local subsystem config 00:35:43.414 23:38:31 -- 
nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:43.414 23:38:31 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.414 23:38:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:43.414 { 00:35:43.414 "params": { 00:35:43.414 "name": "Nvme$subsystem", 00:35:43.414 "trtype": "$TEST_TRANSPORT", 00:35:43.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.414 "adrfam": "ipv4", 00:35:43.414 "trsvcid": "$NVMF_PORT", 00:35:43.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.414 "hdgst": ${hdgst:-false}, 00:35:43.414 "ddgst": ${ddgst:-false} 00:35:43.414 }, 00:35:43.414 "method": "bdev_nvme_attach_controller" 00:35:43.414 } 00:35:43.414 EOF 00:35:43.414 )") 00:35:43.414 23:38:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:43.414 23:38:31 -- target/dif.sh@82 -- # gen_fio_conf 00:35:43.414 23:38:31 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:43.414 23:38:31 -- target/dif.sh@54 -- # local file 00:35:43.414 23:38:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:43.414 23:38:31 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.414 23:38:31 -- target/dif.sh@56 -- # cat 00:35:43.414 23:38:31 -- common/autotest_common.sh@1327 -- # shift 00:35:43.414 23:38:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:43.414 23:38:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:43.414 23:38:31 -- nvmf/common.sh@543 -- # cat 00:35:43.414 23:38:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.414 23:38:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:43.414 23:38:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:43.414 23:38:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:43.414 23:38:31 -- target/dif.sh@72 -- # (( file <= files )) 00:35:43.414 23:38:31 -- target/dif.sh@73 -- # cat 00:35:43.414 23:38:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:43.414 23:38:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:43.414 { 00:35:43.414 "params": { 00:35:43.414 "name": "Nvme$subsystem", 00:35:43.414 "trtype": "$TEST_TRANSPORT", 00:35:43.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.414 "adrfam": "ipv4", 00:35:43.414 "trsvcid": "$NVMF_PORT", 00:35:43.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.414 "hdgst": ${hdgst:-false}, 00:35:43.414 "ddgst": ${ddgst:-false} 00:35:43.414 }, 00:35:43.414 "method": "bdev_nvme_attach_controller" 00:35:43.414 } 00:35:43.414 EOF 00:35:43.414 )") 00:35:43.414 23:38:31 -- target/dif.sh@72 -- # (( file++ )) 00:35:43.414 23:38:31 -- nvmf/common.sh@543 -- # cat 00:35:43.414 23:38:31 -- target/dif.sh@72 -- # (( file <= files )) 00:35:43.414 23:38:31 -- nvmf/common.sh@545 -- # jq . 
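
Annotation: the xtrace above shows gen_nvmf_target_json building one bdev_nvme_attach_controller object per subsystem via a heredoc, then joining the objects with a comma IFS before jq pretty-prints the result. Below is a minimal standalone sketch of that builder, mirroring the variable names visible in the trace; it assumes TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT (and optionally hdgst/ddgst) are exported by the harness, and the subsystems/config envelope at the end is the standard SPDK JSON-config shape rather than something echoed verbatim in this log:

    config=()
    for subsystem in "${@:-1}"; do   # default to subsystem 1, as in the trace
      config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
      )")
    done
    # join the per-subsystem objects with commas, wrap them in the assumed
    # SPDK JSON-config envelope, and pretty-print (subshell keeps IFS local)
    (IFS=,; printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' \
        "${config[*]}" | jq .)
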
00:35:43.414 23:38:31 -- nvmf/common.sh@546 -- # IFS=, 00:35:43.414 23:38:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:43.414 "params": { 00:35:43.414 "name": "Nvme0", 00:35:43.414 "trtype": "tcp", 00:35:43.414 "traddr": "10.0.0.2", 00:35:43.414 "adrfam": "ipv4", 00:35:43.414 "trsvcid": "4420", 00:35:43.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.414 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.414 "hdgst": false, 00:35:43.414 "ddgst": false 00:35:43.414 }, 00:35:43.414 "method": "bdev_nvme_attach_controller" 00:35:43.414 },{ 00:35:43.414 "params": { 00:35:43.414 "name": "Nvme1", 00:35:43.414 "trtype": "tcp", 00:35:43.414 "traddr": "10.0.0.2", 00:35:43.414 "adrfam": "ipv4", 00:35:43.414 "trsvcid": "4420", 00:35:43.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:43.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:43.414 "hdgst": false, 00:35:43.414 "ddgst": false 00:35:43.414 }, 00:35:43.414 "method": "bdev_nvme_attach_controller" 00:35:43.414 }' 00:35:43.414 23:38:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:43.414 23:38:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:43.414 23:38:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:43.415 23:38:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.415 23:38:31 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:43.415 23:38:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:43.415 23:38:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:43.415 23:38:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:43.415 23:38:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:43.415 23:38:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.415 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:43.415 ... 00:35:43.415 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:43.415 ... 
00:35:43.415 fio-3.35 00:35:43.415 Starting 4 threads 00:35:43.415 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.710 00:35:48.710 filename0: (groupid=0, jobs=1): err= 0: pid=20139: Fri Apr 26 23:38:37 2024 00:35:48.711 read: IOPS=2075, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5002msec) 00:35:48.711 slat (nsec): min=5336, max=52800, avg=8209.82, stdev=1964.24 00:35:48.711 clat (usec): min=2119, max=6700, avg=3832.62, stdev=620.73 00:35:48.711 lat (usec): min=2133, max=6712, avg=3840.83, stdev=620.67 00:35:48.711 clat percentiles (usec): 00:35:48.711 | 1.00th=[ 2933], 5.00th=[ 3261], 10.00th=[ 3392], 20.00th=[ 3490], 00:35:48.711 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3752], 00:35:48.711 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 5145], 95.00th=[ 5473], 00:35:48.711 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6128], 99.95th=[ 6390], 00:35:48.711 | 99.99th=[ 6652] 00:35:48.711 bw ( KiB/s): min=16128, max=17168, per=24.82%, avg=16601.60, stdev=362.21, samples=10 00:35:48.711 iops : min= 2016, max= 2146, avg=2075.20, stdev=45.28, samples=10 00:35:48.711 lat (msec) : 4=83.33%, 10=16.67% 00:35:48.711 cpu : usr=98.08%, sys=1.66%, ctx=6, majf=0, minf=92 00:35:48.711 IO depths : 1=0.1%, 2=0.1%, 4=71.6%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.711 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.711 issued rwts: total=10381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:48.711 filename0: (groupid=0, jobs=1): err= 0: pid=20140: Fri Apr 26 23:38:37 2024 00:35:48.711 read: IOPS=2108, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5003msec) 00:35:48.711 slat (nsec): min=5334, max=60743, avg=6011.20, stdev=1518.04 00:35:48.711 clat (usec): min=1706, max=6209, avg=3777.64, stdev=647.12 00:35:48.711 lat (usec): min=1712, max=6215, avg=3783.65, stdev=647.13 00:35:48.711 clat percentiles (usec): 00:35:48.711 | 1.00th=[ 2638], 5.00th=[ 2966], 10.00th=[ 3195], 20.00th=[ 3392], 00:35:48.711 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3687], 60.00th=[ 3720], 00:35:48.711 | 70.00th=[ 3785], 80.00th=[ 3851], 90.00th=[ 4948], 95.00th=[ 5407], 00:35:48.711 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 5997], 99.95th=[ 6063], 00:35:48.711 | 99.99th=[ 6194] 00:35:48.711 bw ( KiB/s): min=16176, max=17328, per=25.22%, avg=16872.00, stdev=387.92, samples=10 00:35:48.711 iops : min= 2022, max= 2166, avg=2109.00, stdev=48.49, samples=10 00:35:48.711 lat (msec) : 2=0.01%, 4=82.59%, 10=17.40% 00:35:48.711 cpu : usr=97.28%, sys=2.48%, ctx=7, majf=0, minf=31 00:35:48.711 IO depths : 1=0.1%, 2=0.6%, 4=70.0%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.711 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.711 issued rwts: total=10550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:48.711 filename1: (groupid=0, jobs=1): err= 0: pid=20141: Fri Apr 26 23:38:37 2024 00:35:48.711 read: IOPS=2102, BW=16.4MiB/s (17.2MB/s)(82.2MiB/5003msec) 00:35:48.711 slat (nsec): min=5328, max=46097, avg=6088.36, stdev=1807.86 00:35:48.711 clat (usec): min=1965, max=6284, avg=3789.55, stdev=526.76 00:35:48.711 lat (usec): min=1971, max=6290, avg=3795.64, stdev=526.78 00:35:48.711 clat percentiles (usec): 00:35:48.711 | 1.00th=[ 2999], 5.00th=[ 3294], 10.00th=[ 
3425], 20.00th=[ 3490], 00:35:48.711 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3752], 00:35:48.711 | 70.00th=[ 3785], 80.00th=[ 3851], 90.00th=[ 4228], 95.00th=[ 5276], 00:35:48.711 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6128], 99.95th=[ 6259], 00:35:48.711 | 99.99th=[ 6259] 00:35:48.711 bw ( KiB/s): min=16144, max=17744, per=25.14%, avg=16816.00, stdev=544.00, samples=10 00:35:48.711 iops : min= 2018, max= 2218, avg=2102.00, stdev=68.00, samples=10 00:35:48.711 lat (msec) : 2=0.01%, 4=85.42%, 10=14.58% 00:35:48.711 cpu : usr=96.06%, sys=3.16%, ctx=172, majf=0, minf=136 00:35:48.711 IO depths : 1=0.1%, 2=0.2%, 4=69.8%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.711 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.711 issued rwts: total=10518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:48.711 filename1: (groupid=0, jobs=1): err= 0: pid=20142: Fri Apr 26 23:38:37 2024 00:35:48.711 read: IOPS=2075, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5002msec) 00:35:48.711 slat (nsec): min=5333, max=91916, avg=5918.46, stdev=1861.34 00:35:48.711 clat (usec): min=1557, max=6647, avg=3838.40, stdev=584.06 00:35:48.711 lat (usec): min=1562, max=6652, avg=3844.32, stdev=584.04 00:35:48.711 clat percentiles (usec): 00:35:48.711 | 1.00th=[ 3130], 5.00th=[ 3294], 10.00th=[ 3392], 20.00th=[ 3490], 00:35:48.711 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3752], 00:35:48.711 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 4883], 95.00th=[ 5473], 00:35:48.711 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6128], 99.95th=[ 6521], 00:35:48.711 | 99.99th=[ 6652] 00:35:48.711 bw ( KiB/s): min=15952, max=17280, per=24.82%, avg=16600.00, stdev=416.09, samples=10 00:35:48.711 iops : min= 1994, max= 2160, avg=2075.00, stdev=52.01, samples=10 00:35:48.711 lat (msec) : 2=0.05%, 4=83.59%, 10=16.36% 00:35:48.711 cpu : usr=97.72%, sys=2.04%, ctx=9, majf=0, minf=67 00:35:48.711 IO depths : 1=0.1%, 2=0.1%, 4=70.6%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.711 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.711 issued rwts: total=10383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:48.711 00:35:48.711 Run status group 0 (all jobs): 00:35:48.711 READ: bw=65.3MiB/s (68.5MB/s), 16.2MiB/s-16.5MiB/s (17.0MB/s-17.3MB/s), io=327MiB (343MB), run=5002-5003msec 00:35:48.711 23:38:37 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:48.711 23:38:37 -- target/dif.sh@43 -- # local sub 00:35:48.711 23:38:37 -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.711 23:38:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:48.711 23:38:37 -- target/dif.sh@36 -- # local sub_id=0 00:35:48.711 23:38:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.711 23:38:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:48.711 23:38:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.711 23:38:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:48.711 23:38:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:48.711 23:38:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:48.711 23:38:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.711 23:38:37 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:48.711 23:38:37 -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.711 23:38:37 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:48.711 23:38:37 -- target/dif.sh@36 -- # local sub_id=1 00:35:48.711 23:38:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:48.711 23:38:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:48.711 23:38:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.711 23:38:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:48.711 23:38:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:48.711 23:38:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:48.711 23:38:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.711 23:38:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:48.711 00:35:48.711 real 0m24.223s 00:35:48.711 user 5m20.524s 00:35:48.711 sys 0m3.865s 00:35:48.711 23:38:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:48.711 23:38:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.711 ************************************ 00:35:48.711 END TEST fio_dif_rand_params 00:35:48.711 ************************************ 00:35:48.711 23:38:37 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:48.711 23:38:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:48.711 23:38:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:48.711 23:38:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.711 ************************************ 00:35:48.711 START TEST fio_dif_digest 00:35:48.711 ************************************ 00:35:48.711 23:38:37 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:35:48.711 23:38:37 -- target/dif.sh@123 -- # local NULL_DIF 00:35:48.711 23:38:37 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:48.711 23:38:37 -- target/dif.sh@125 -- # local hdgst ddgst 00:35:48.711 23:38:37 -- target/dif.sh@127 -- # NULL_DIF=3 00:35:48.711 23:38:37 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:48.711 23:38:37 -- target/dif.sh@127 -- # numjobs=3 00:35:48.711 23:38:37 -- target/dif.sh@127 -- # iodepth=3 00:35:48.711 23:38:37 -- target/dif.sh@127 -- # runtime=10 00:35:48.711 23:38:37 -- target/dif.sh@128 -- # hdgst=true 00:35:48.711 23:38:37 -- target/dif.sh@128 -- # ddgst=true 00:35:48.711 23:38:37 -- target/dif.sh@130 -- # create_subsystems 0 00:35:48.711 23:38:37 -- target/dif.sh@28 -- # local sub 00:35:48.711 23:38:37 -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.711 23:38:37 -- target/dif.sh@31 -- # create_subsystem 0 00:35:48.711 23:38:37 -- target/dif.sh@18 -- # local sub_id=0 00:35:48.711 23:38:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:48.711 23:38:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:48.711 23:38:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.711 bdev_null0 00:35:48.711 23:38:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:48.711 23:38:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:48.711 23:38:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:48.711 23:38:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.711 23:38:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:48.711 23:38:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:48.711 
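
Annotation: rpc_cmd here is the harness wrapper around SPDK's scripts/rpc.py, so the target setup for this digest test can be replayed by hand against a running nvmf_tgt. The commands and flags below are copied from this trace (including the listener added just after this point); only the rpc.py path is specific to this CI workspace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 3
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    # TCP listener that produces the "Target Listening" notice below
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
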
23:38:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:48.711 23:38:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.711 23:38:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:48.711 23:38:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:48.711 23:38:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:48.711 23:38:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.711 [2024-04-26 23:38:37.772889] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.711 23:38:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:48.711 23:38:37 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:48.711 23:38:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:48.711 23:38:37 -- nvmf/common.sh@521 -- # config=() 00:35:48.712 23:38:37 -- nvmf/common.sh@521 -- # local subsystem config 00:35:48.712 23:38:37 -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:48.712 23:38:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:48.712 23:38:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:48.712 { 00:35:48.712 "params": { 00:35:48.712 "name": "Nvme$subsystem", 00:35:48.712 "trtype": "$TEST_TRANSPORT", 00:35:48.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.712 "adrfam": "ipv4", 00:35:48.712 "trsvcid": "$NVMF_PORT", 00:35:48.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.712 "hdgst": ${hdgst:-false}, 00:35:48.712 "ddgst": ${ddgst:-false} 00:35:48.712 }, 00:35:48.712 "method": "bdev_nvme_attach_controller" 00:35:48.712 } 00:35:48.712 EOF 00:35:48.712 )") 00:35:48.712 23:38:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.712 23:38:37 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.712 23:38:37 -- target/dif.sh@82 -- # gen_fio_conf 00:35:48.712 23:38:37 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:48.712 23:38:37 -- nvmf/common.sh@543 -- # cat 00:35:48.712 23:38:37 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:48.712 23:38:37 -- target/dif.sh@54 -- # local file 00:35:48.712 23:38:37 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:48.712 23:38:37 -- target/dif.sh@56 -- # cat 00:35:48.712 23:38:37 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.712 23:38:37 -- common/autotest_common.sh@1327 -- # shift 00:35:48.712 23:38:37 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:48.712 23:38:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.712 23:38:37 -- nvmf/common.sh@545 -- # jq . 
00:35:48.712 23:38:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.712 23:38:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:48.712 23:38:37 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:48.712 23:38:37 -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.712 23:38:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:48.712 23:38:37 -- nvmf/common.sh@546 -- # IFS=, 00:35:48.712 23:38:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:48.712 "params": { 00:35:48.712 "name": "Nvme0", 00:35:48.712 "trtype": "tcp", 00:35:48.712 "traddr": "10.0.0.2", 00:35:48.712 "adrfam": "ipv4", 00:35:48.712 "trsvcid": "4420", 00:35:48.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.712 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.712 "hdgst": true, 00:35:48.712 "ddgst": true 00:35:48.712 }, 00:35:48.712 "method": "bdev_nvme_attach_controller" 00:35:48.712 }' 00:35:48.712 23:38:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:48.712 23:38:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:48.712 23:38:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.712 23:38:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.712 23:38:37 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:48.712 23:38:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:48.712 23:38:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:48.712 23:38:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:48.712 23:38:37 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:48.712 23:38:37 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.973 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:48.973 ... 
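
Annotation: pulling the moving parts of this digest run together, the printf output above is the bdev attach config (note "hdgst": true and "ddgst": true, which enable NVMe/TCP header and data digests on the initiator side), and it reaches fio through /dev/fd/62 while the jobfile arrives on /dev/fd/61. A condensed, hand-runnable equivalent is sketched below; the subsystems/config envelope is the standard SPDK JSON-config shape (not echoed in the log), the fio and plugin paths match this CI box, and /tmp/digest.fio stands in for the generated jobfile:

    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    cat > /tmp/digest.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [{
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true,
        "ddgst": true
      },
      "method": "bdev_nvme_attach_controller"
    }]}]}
    EOF
    # preload the SPDK bdev engine and point fio at the config and jobfile
    LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /tmp/digest.json /tmp/digest.fio
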
00:35:48.973 fio-3.35 00:35:48.973 Starting 3 threads 00:35:49.235 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.482 00:36:01.482 filename0: (groupid=0, jobs=1): err= 0: pid=21612: Fri Apr 26 23:38:48 2024 00:36:01.482 read: IOPS=201, BW=25.1MiB/s (26.4MB/s)(253MiB/10048msec) 00:36:01.482 slat (nsec): min=5603, max=30025, avg=7246.12, stdev=1966.46 00:36:01.482 clat (usec): min=8582, max=58063, avg=14885.04, stdev=4415.01 00:36:01.482 lat (usec): min=8588, max=58069, avg=14892.28, stdev=4414.94 00:36:01.482 clat percentiles (usec): 00:36:01.482 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[12256], 20.00th=[13435], 00:36:01.482 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[15139], 00:36:01.482 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:36:01.482 | 99.00th=[19006], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:36:01.482 | 99.99th=[57934] 00:36:01.482 bw ( KiB/s): min=24064, max=27904, per=31.49%, avg=25830.40, stdev=1098.43, samples=20 00:36:01.482 iops : min= 188, max= 218, avg=201.80, stdev= 8.58, samples=20 00:36:01.482 lat (msec) : 10=2.08%, 20=96.93%, 100=0.99% 00:36:01.482 cpu : usr=95.77%, sys=3.96%, ctx=33, majf=0, minf=97 00:36:01.482 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.482 issued rwts: total=2021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.482 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:01.482 filename0: (groupid=0, jobs=1): err= 0: pid=21613: Fri Apr 26 23:38:48 2024 00:36:01.482 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(240MiB/10052msec) 00:36:01.482 slat (nsec): min=5542, max=35739, avg=7185.21, stdev=1945.86 00:36:01.482 clat (usec): min=8555, max=97109, avg=15703.88, stdev=6864.41 00:36:01.482 lat (usec): min=8562, max=97117, avg=15711.06, stdev=6864.56 00:36:01.482 clat percentiles (usec): 00:36:01.482 | 1.00th=[10028], 5.00th=[11994], 10.00th=[12911], 20.00th=[13566], 00:36:01.482 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14746], 60.00th=[15139], 00:36:01.482 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16581], 95.00th=[17171], 00:36:01.482 | 99.00th=[56361], 99.50th=[56886], 99.90th=[57934], 99.95th=[96994], 00:36:01.482 | 99.99th=[96994] 00:36:01.482 bw ( KiB/s): min=19712, max=27136, per=29.85%, avg=24486.40, stdev=2029.86, samples=20 00:36:01.482 iops : min= 154, max= 212, avg=191.30, stdev=15.86, samples=20 00:36:01.482 lat (msec) : 10=0.94%, 20=96.50%, 100=2.56% 00:36:01.482 cpu : usr=95.70%, sys=4.04%, ctx=18, majf=0, minf=88 00:36:01.482 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.482 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.482 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:01.482 filename0: (groupid=0, jobs=1): err= 0: pid=21614: Fri Apr 26 23:38:48 2024 00:36:01.482 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(313MiB/10047msec) 00:36:01.482 slat (nsec): min=5620, max=52776, avg=6749.62, stdev=1574.79 00:36:01.482 clat (usec): min=6708, max=54575, avg=12011.26, stdev=2276.58 00:36:01.482 lat (usec): min=6715, max=54581, avg=12018.01, stdev=2276.54 00:36:01.482 clat percentiles (usec): 00:36:01.482 | 1.00th=[ 8094], 
5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10814], 00:36:01.482 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:36:01.482 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[13960], 00:36:01.482 | 99.00th=[14746], 99.50th=[15270], 99.90th=[51643], 99.95th=[52167], 00:36:01.482 | 99.99th=[54789] 00:36:01.482 bw ( KiB/s): min=29440, max=35072, per=39.05%, avg=32025.60, stdev=1510.87, samples=20 00:36:01.482 iops : min= 230, max= 274, avg=250.20, stdev=11.80, samples=20 00:36:01.482 lat (msec) : 10=10.74%, 20=89.06%, 50=0.04%, 100=0.16% 00:36:01.482 cpu : usr=96.12%, sys=3.64%, ctx=23, majf=0, minf=234 00:36:01.482 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.482 issued rwts: total=2504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.482 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:01.482 00:36:01.482 Run status group 0 (all jobs): 00:36:01.482 READ: bw=80.1MiB/s (84.0MB/s), 23.8MiB/s-31.2MiB/s (25.0MB/s-32.7MB/s), io=805MiB (844MB), run=10047-10052msec 00:36:01.482 23:38:48 -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:01.482 23:38:48 -- target/dif.sh@43 -- # local sub 00:36:01.482 23:38:48 -- target/dif.sh@45 -- # for sub in "$@" 00:36:01.482 23:38:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:01.482 23:38:48 -- target/dif.sh@36 -- # local sub_id=0 00:36:01.482 23:38:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:01.482 23:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:01.482 23:38:48 -- common/autotest_common.sh@10 -- # set +x 00:36:01.482 23:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:01.482 23:38:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:01.482 23:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:01.482 23:38:48 -- common/autotest_common.sh@10 -- # set +x 00:36:01.482 23:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:01.482 00:36:01.482 real 0m11.154s 00:36:01.482 user 0m45.410s 00:36:01.482 sys 0m1.443s 00:36:01.482 23:38:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:01.482 23:38:48 -- common/autotest_common.sh@10 -- # set +x 00:36:01.482 ************************************ 00:36:01.482 END TEST fio_dif_digest 00:36:01.482 ************************************ 00:36:01.482 23:38:48 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:01.482 23:38:48 -- target/dif.sh@147 -- # nvmftestfini 00:36:01.482 23:38:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:36:01.482 23:38:48 -- nvmf/common.sh@117 -- # sync 00:36:01.482 23:38:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:01.482 23:38:48 -- nvmf/common.sh@120 -- # set +e 00:36:01.482 23:38:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:01.482 23:38:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:01.482 rmmod nvme_tcp 00:36:01.482 rmmod nvme_fabrics 00:36:01.482 rmmod nvme_keyring 00:36:01.482 23:38:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:01.482 23:38:48 -- nvmf/common.sh@124 -- # set -e 00:36:01.482 23:38:48 -- nvmf/common.sh@125 -- # return 0 00:36:01.482 23:38:48 -- nvmf/common.sh@478 -- # '[' -n 11108 ']' 00:36:01.482 23:38:48 -- nvmf/common.sh@479 -- # killprocess 11108 00:36:01.482 23:38:48 -- common/autotest_common.sh@936 -- # '[' -z 11108 ']' 00:36:01.482 
23:38:48 -- common/autotest_common.sh@940 -- # kill -0 11108 00:36:01.482 23:38:48 -- common/autotest_common.sh@941 -- # uname 00:36:01.482 23:38:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:01.482 23:38:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 11108 00:36:01.482 23:38:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:01.482 23:38:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:01.482 23:38:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 11108' 00:36:01.482 killing process with pid 11108 00:36:01.483 23:38:49 -- common/autotest_common.sh@955 -- # kill 11108 00:36:01.483 23:38:49 -- common/autotest_common.sh@960 -- # wait 11108 00:36:01.483 23:38:49 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:36:01.483 23:38:49 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:03.397 Waiting for block devices as requested 00:36:03.397 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:03.397 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:03.397 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:03.657 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:03.657 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:03.657 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:03.657 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:03.916 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:03.916 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:04.176 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:04.176 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:04.176 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:04.437 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:04.438 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:04.438 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:04.438 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:04.698 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:04.959 23:38:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:36:04.959 23:38:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:36:04.959 23:38:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:04.959 23:38:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:04.959 23:38:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.959 23:38:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:04.959 23:38:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.997 23:38:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:06.997 00:36:06.997 real 1m17.308s 00:36:06.997 user 8m9.147s 00:36:06.997 sys 0m19.334s 00:36:06.997 23:38:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:06.997 23:38:56 -- common/autotest_common.sh@10 -- # set +x 00:36:06.997 ************************************ 00:36:06.997 END TEST nvmf_dif 00:36:06.997 ************************************ 00:36:06.997 23:38:56 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:06.997 23:38:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:06.997 23:38:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:06.997 23:38:56 -- common/autotest_common.sh@10 -- # set +x 00:36:07.258 ************************************ 00:36:07.258 START TEST nvmf_abort_qd_sizes 00:36:07.258 ************************************ 00:36:07.258 23:38:56 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:07.258 * Looking for test storage... 00:36:07.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:07.258 23:38:56 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.258 23:38:56 -- nvmf/common.sh@7 -- # uname -s 00:36:07.258 23:38:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.258 23:38:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.258 23:38:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.258 23:38:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.259 23:38:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:07.259 23:38:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:07.259 23:38:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.259 23:38:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:07.259 23:38:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.259 23:38:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:07.259 23:38:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:07.259 23:38:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:07.259 23:38:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.259 23:38:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:07.259 23:38:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:07.259 23:38:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.259 23:38:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.259 23:38:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.259 23:38:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.259 23:38:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.259 23:38:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.259 23:38:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.259 23:38:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.259 23:38:56 -- paths/export.sh@5 -- # export PATH 00:36:07.259 23:38:56 -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.259 23:38:56 -- nvmf/common.sh@47 -- # : 0 00:36:07.259 23:38:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:07.259 23:38:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:07.259 23:38:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:07.259 23:38:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.259 23:38:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.259 23:38:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:07.259 23:38:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:07.259 23:38:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:07.259 23:38:56 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:07.259 23:38:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:36:07.259 23:38:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.259 23:38:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:36:07.259 23:38:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:36:07.259 23:38:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:36:07.259 23:38:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.259 23:38:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:07.259 23:38:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.259 23:38:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:36:07.259 23:38:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:36:07.259 23:38:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:36:07.259 23:38:56 -- common/autotest_common.sh@10 -- # set +x 00:36:13.855 23:39:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:36:13.855 23:39:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:36:13.855 23:39:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:13.855 23:39:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:13.855 23:39:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:13.855 23:39:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:13.855 23:39:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:13.855 23:39:03 -- nvmf/common.sh@295 -- # net_devs=() 00:36:13.855 23:39:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:13.855 23:39:03 -- nvmf/common.sh@296 -- # e810=() 00:36:13.855 23:39:03 -- nvmf/common.sh@296 -- # local -ga e810 00:36:13.855 23:39:03 -- nvmf/common.sh@297 -- # x722=() 00:36:13.855 23:39:03 -- nvmf/common.sh@297 -- # local -ga x722 00:36:13.855 23:39:03 -- nvmf/common.sh@298 -- # mlx=() 00:36:13.855 23:39:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:36:13.855 23:39:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@312 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.855 23:39:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:13.855 23:39:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:13.855 23:39:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:13.855 23:39:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:13.855 23:39:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:13.855 23:39:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:13.855 23:39:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:13.856 23:39:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:13.856 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:13.856 23:39:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:13.856 23:39:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:13.856 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:13.856 23:39:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:13.856 23:39:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:13.856 23:39:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.856 23:39:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:36:13.856 23:39:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.856 23:39:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:13.856 Found net devices under 0000:31:00.0: cvl_0_0 00:36:13.856 23:39:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.856 23:39:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:13.856 23:39:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.856 23:39:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:36:13.856 23:39:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.856 23:39:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:13.856 Found net devices under 0000:31:00.1: cvl_0_1 00:36:13.856 23:39:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.856 23:39:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:36:13.856 23:39:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:36:13.856 23:39:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:36:13.856 23:39:03 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:36:13.856 23:39:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:36:13.856 23:39:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.856 23:39:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.856 23:39:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.856 23:39:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:13.856 23:39:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.856 23:39:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.856 23:39:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:13.856 23:39:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.856 23:39:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.856 23:39:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:13.856 23:39:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:13.856 23:39:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.856 23:39:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.118 23:39:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.118 23:39:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.118 23:39:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:14.118 23:39:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.118 23:39:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.118 23:39:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.118 23:39:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:14.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:36:14.379 00:36:14.379 --- 10.0.0.2 ping statistics --- 00:36:14.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.379 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:36:14.379 23:39:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:14.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:14.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:36:14.379 00:36:14.379 --- 10.0.0.1 ping statistics --- 00:36:14.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.379 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:36:14.379 23:39:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.379 23:39:03 -- nvmf/common.sh@411 -- # return 0 00:36:14.379 23:39:03 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:36:14.379 23:39:03 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:16.926 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:16.926 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:16.926 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:16.926 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:17.187 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:17.448 23:39:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:17.448 23:39:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:36:17.448 23:39:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:36:17.448 23:39:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:17.448 23:39:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:36:17.448 23:39:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:36:17.709 23:39:06 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:17.709 23:39:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:36:17.709 23:39:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:36:17.709 23:39:06 -- common/autotest_common.sh@10 -- # set +x 00:36:17.709 23:39:06 -- nvmf/common.sh@470 -- # nvmfpid=31143 00:36:17.709 23:39:06 -- nvmf/common.sh@471 -- # waitforlisten 31143 00:36:17.709 23:39:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:17.709 23:39:06 -- common/autotest_common.sh@817 -- # '[' -z 31143 ']' 00:36:17.709 23:39:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.709 23:39:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:17.709 23:39:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:17.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:17.709 23:39:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:17.709 23:39:06 -- common/autotest_common.sh@10 -- # set +x 00:36:17.709 [2024-04-26 23:39:06.777590] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
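[Annotation] For reference, the nvmf_tcp_init sequence traced above boils down to the following sketch of the same commands; the cvl_0_0/cvl_0_1 interface names belong to the E810 NIC pair this rig detected, so substitute your own:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # host-side initiator reaching the namespaced target

This namespace is why every target-side command later in the log runs under 'ip netns exec cvl_0_0_ns_spdk', including the nvmf_tgt launch that follows.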
00:36:17.709 [2024-04-26 23:39:06.777641] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:17.709 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.709 [2024-04-26 23:39:06.843131] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:17.709 [2024-04-26 23:39:06.875263] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:17.709 [2024-04-26 23:39:06.875300] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:17.709 [2024-04-26 23:39:06.875309] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:17.709 [2024-04-26 23:39:06.875316] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:17.709 [2024-04-26 23:39:06.875323] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:17.709 [2024-04-26 23:39:06.875483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.709 [2024-04-26 23:39:06.875586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:17.709 [2024-04-26 23:39:06.875709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.709 [2024-04-26 23:39:06.875710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:18.652 23:39:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:18.652 23:39:07 -- common/autotest_common.sh@850 -- # return 0 00:36:18.652 23:39:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:36:18.652 23:39:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:36:18.652 23:39:07 -- common/autotest_common.sh@10 -- # set +x 00:36:18.652 23:39:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:18.652 23:39:07 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:18.652 23:39:07 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:18.652 23:39:07 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:18.652 23:39:07 -- scripts/common.sh@309 -- # local bdf bdfs 00:36:18.652 23:39:07 -- scripts/common.sh@310 -- # local nvmes 00:36:18.652 23:39:07 -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:36:18.652 23:39:07 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:18.652 23:39:07 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:18.652 23:39:07 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:36:18.652 23:39:07 -- scripts/common.sh@320 -- # uname -s 00:36:18.652 23:39:07 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:18.652 23:39:07 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:18.652 23:39:07 -- scripts/common.sh@325 -- # (( 1 )) 00:36:18.652 23:39:07 -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:36:18.652 23:39:07 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:18.652 23:39:07 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:18.652 23:39:07 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:18.652 23:39:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:18.652 23:39:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:18.652 23:39:07 -- 
common/autotest_common.sh@10 -- # set +x 00:36:18.652 ************************************ 00:36:18.652 START TEST spdk_target_abort 00:36:18.652 ************************************ 00:36:18.652 23:39:07 -- common/autotest_common.sh@1111 -- # spdk_target 00:36:18.652 23:39:07 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:18.652 23:39:07 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:18.652 23:39:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:18.652 23:39:07 -- common/autotest_common.sh@10 -- # set +x 00:36:18.913 spdk_targetn1 00:36:18.913 23:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:18.913 23:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:18.913 23:39:08 -- common/autotest_common.sh@10 -- # set +x 00:36:18.913 [2024-04-26 23:39:08.043814] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.913 23:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:18.913 23:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:18.913 23:39:08 -- common/autotest_common.sh@10 -- # set +x 00:36:18.913 23:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:18.913 23:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:18.913 23:39:08 -- common/autotest_common.sh@10 -- # set +x 00:36:18.913 23:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:18.913 23:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:18.913 23:39:08 -- common/autotest_common.sh@10 -- # set +x 00:36:18.913 [2024-04-26 23:39:08.081070] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:18.913 23:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
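[Annotation] Collected from the trace above, the whole export of the local PCIe NVMe device over NVMe/TCP is five RPCs (a sketch; it assumes nvmf_tgt is already up and listening on the default RPC socket):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # exposes bdev spdk_targetn1
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

rabort then loops the abort example binary over queue depths 4, 24 and 64 against that listener.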
00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:18.913 23:39:08 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.913 EAL: No free 2048 kB hugepages reported on node 1 00:36:19.174 [2024-04-26 23:39:08.194406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:80 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:36:19.174 [2024-04-26 23:39:08.194429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:36:19.174 [2024-04-26 23:39:08.211859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:776 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:36:19.174 [2024-04-26 23:39:08.211876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0062 p:1 m:0 dnr:0 00:36:19.174 [2024-04-26 23:39:08.220321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1080 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:36:19.174 [2024-04-26 23:39:08.220336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0089 p:1 m:0 dnr:0 00:36:19.174 [2024-04-26 23:39:08.235861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1648 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:36:19.174 [2024-04-26 23:39:08.235876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00d0 p:1 m:0 dnr:0 00:36:19.174 [2024-04-26 23:39:08.258726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2528 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:36:19.174 [2024-04-26 23:39:08.258742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:19.174 [2024-04-26 23:39:08.266321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2848 len:8 PRP1 0x2000078be000 PRP2 0x0 00:36:19.174 [2024-04-26 23:39:08.266336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:19.174 [2024-04-26 23:39:08.283392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3608 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:36:19.174 [2024-04-26 23:39:08.283407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00c5 p:0 m:0 dnr:0 00:36:22.477 Initializing NVMe Controllers 00:36:22.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:22.477 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:22.477 Initialization complete. Launching workers. 00:36:22.477 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14054, failed: 7 00:36:22.477 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2750, failed to submit 11311 00:36:22.477 success 744, unsuccess 2006, failed 0 00:36:22.477 23:39:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:22.477 23:39:11 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:22.477 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.477 [2024-04-26 23:39:11.497021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:920 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:36:22.477 [2024-04-26 23:39:11.497064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:36:22.477 [2024-04-26 23:39:11.526419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:1552 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:36:22.477 [2024-04-26 23:39:11.526445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00cd p:1 m:0 dnr:0 00:36:22.477 [2024-04-26 23:39:11.543928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:1832 len:8 PRP1 0x200007c44000 PRP2 0x0 00:36:22.477 [2024-04-26 23:39:11.543952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00ed p:1 m:0 dnr:0 00:36:22.478 [2024-04-26 23:39:11.560001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:2248 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:36:22.478 [2024-04-26 23:39:11.560022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:22.478 [2024-04-26 23:39:11.575970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2632 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:36:22.478 [2024-04-26 23:39:11.575992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:22.478 [2024-04-26 23:39:11.583861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:2792 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:36:22.478 [2024-04-26 23:39:11.583881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:22.478 [2024-04-26 23:39:11.615936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:3528 len:8 PRP1 0x200007c52000 PRP2 0x0 00:36:22.478 [2024-04-26 23:39:11.615958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:00c9 p:0 m:0 dnr:0 00:36:22.478 [2024-04-26 23:39:11.632009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:4040 len:8 PRP1 0x200007c46000 PRP2 0x0 00:36:22.478 [2024-04-26 23:39:11.632031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00fb p:0 m:0 dnr:0 00:36:23.422 [2024-04-26 
23:39:12.616003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:26752 len:8 PRP1 0x200007c5e000 PRP2 0x0 00:36:23.422 [2024-04-26 23:39:12.616041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:23.994 [2024-04-26 23:39:13.163990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:39240 len:8 PRP1 0x200007c5e000 PRP2 0x0 00:36:23.994 [2024-04-26 23:39:13.164019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:24.567 [2024-04-26 23:39:13.585001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:48712 len:8 PRP1 0x200007c62000 PRP2 0x0 00:36:24.567 [2024-04-26 23:39:13.585030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00cb p:0 m:0 dnr:0 00:36:24.829 [2024-04-26 23:39:13.967021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:57160 len:8 PRP1 0x200007c48000 PRP2 0x0 00:36:24.829 [2024-04-26 23:39:13.967055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:00f7 p:0 m:0 dnr:0 00:36:25.401 [2024-04-26 23:39:14.496865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67d70 is same with the state(5) to be set 00:36:25.401 [2024-04-26 23:39:14.496892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67d70 is same with the state(5) to be set 00:36:25.401 [2024-04-26 23:39:14.496901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67d70 is same with the state(5) to be set 00:36:25.401 Initializing NVMe Controllers 00:36:25.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:25.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:25.401 Initialization complete. Launching workers. 00:36:25.401 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8587, failed: 12 00:36:25.401 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1221, failed to submit 7378 00:36:25.401 success 333, unsuccess 888, failed 0 00:36:25.401 23:39:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:25.401 23:39:14 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:25.401 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.976 [2024-04-26 23:39:17.142967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:171 nsid:1 lba:270304 len:8 PRP1 0x2000078f4000 PRP2 0x0 00:36:27.976 [2024-04-26 23:39:17.142997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:171 cdw0:0 sqhd:00eb p:1 m:0 dnr:0 00:36:28.549 Initializing NVMe Controllers 00:36:28.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:28.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:28.549 Initialization complete. Launching workers. 
00:36:28.549 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41824, failed: 1 00:36:28.549 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2670, failed to submit 39155 00:36:28.549 success 597, unsuccess 2073, failed 0 00:36:28.549 23:39:17 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:28.549 23:39:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:28.549 23:39:17 -- common/autotest_common.sh@10 -- # set +x 00:36:28.549 23:39:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:28.549 23:39:17 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:28.549 23:39:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:28.549 23:39:17 -- common/autotest_common.sh@10 -- # set +x 00:36:30.466 23:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:30.466 23:39:19 -- target/abort_qd_sizes.sh@61 -- # killprocess 31143 00:36:30.466 23:39:19 -- common/autotest_common.sh@936 -- # '[' -z 31143 ']' 00:36:30.466 23:39:19 -- common/autotest_common.sh@940 -- # kill -0 31143 00:36:30.466 23:39:19 -- common/autotest_common.sh@941 -- # uname 00:36:30.466 23:39:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:30.466 23:39:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 31143 00:36:30.466 23:39:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:30.466 23:39:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:30.466 23:39:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 31143' 00:36:30.466 killing process with pid 31143 00:36:30.466 23:39:19 -- common/autotest_common.sh@955 -- # kill 31143 00:36:30.466 23:39:19 -- common/autotest_common.sh@960 -- # wait 31143 00:36:30.727 00:36:30.727 real 0m12.027s 00:36:30.727 user 0m49.505s 00:36:30.727 sys 0m1.810s 00:36:30.727 23:39:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:30.727 23:39:19 -- common/autotest_common.sh@10 -- # set +x 00:36:30.727 ************************************ 00:36:30.727 END TEST spdk_target_abort 00:36:30.727 ************************************ 00:36:30.727 23:39:19 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:30.727 23:39:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:30.727 23:39:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:30.727 23:39:19 -- common/autotest_common.sh@10 -- # set +x 00:36:30.727 ************************************ 00:36:30.727 START TEST kernel_target_abort 00:36:30.727 ************************************ 00:36:30.727 23:39:19 -- common/autotest_common.sh@1111 -- # kernel_target 00:36:30.727 23:39:19 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:30.727 23:39:19 -- nvmf/common.sh@717 -- # local ip 00:36:30.727 23:39:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:36:30.727 23:39:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:36:30.727 23:39:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.727 23:39:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.727 23:39:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:36:30.727 23:39:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.727 23:39:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:36:30.727 23:39:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:36:30.727 23:39:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
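[Annotation] The kernel_target_abort variant starting here exercises the in-kernel nvmet target instead of nvmf_tgt. The configure_kernel_target pass traced below is plain configfs manipulation; condensed, it is roughly the following. The mkdir/echo/ln commands appear verbatim in the trace, but xtrace hides each echo's redirection target, so the attribute file names here are the standard nvmet ones and should be treated as an assumption:

modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2016-06.io.spdk:testnqn
mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
mkdir ports/1
echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
echo 10.0.0.1     > ports/1/addr_traddr
echo tcp          > ports/1/addr_trtype
echo 4420         > ports/1/addr_trsvcid
echo ipv4         > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The 'nvme discover' output further down confirms the port then serves both the discovery subsystem and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420.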
00:36:30.727 23:39:19 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:30.727 23:39:19 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:30.727 23:39:19 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:36:30.727 23:39:19 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:30.727 23:39:19 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:30.727 23:39:19 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:30.727 23:39:19 -- nvmf/common.sh@628 -- # local block nvme 00:36:30.727 23:39:19 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:36:30.727 23:39:19 -- nvmf/common.sh@631 -- # modprobe nvmet 00:36:30.989 23:39:19 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:30.989 23:39:19 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:34.295 Waiting for block devices as requested 00:36:34.295 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:34.295 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:34.295 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:34.556 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:34.556 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:34.556 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:34.816 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:34.816 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:34.816 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:35.076 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:35.076 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:35.076 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:35.337 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:35.337 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:35.337 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:35.337 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:35.597 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:35.858 23:39:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:36:35.858 23:39:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:35.858 23:39:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:36:35.858 23:39:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:35.858 23:39:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:35.858 23:39:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:35.858 23:39:24 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:36:35.858 23:39:24 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:35.858 23:39:24 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:35.858 No valid GPT data, bailing 00:36:35.858 23:39:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:35.858 23:39:24 -- scripts/common.sh@391 -- # pt= 00:36:35.858 23:39:24 -- scripts/common.sh@392 -- # return 1 00:36:35.858 23:39:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:36:35.858 23:39:24 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:36:35.858 23:39:24 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:35.858 23:39:25 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 
00:36:35.858 23:39:25 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:35.858 23:39:25 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:35.858 23:39:25 -- nvmf/common.sh@656 -- # echo 1 00:36:35.858 23:39:25 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:36:35.858 23:39:25 -- nvmf/common.sh@658 -- # echo 1 00:36:35.858 23:39:25 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:36:35.858 23:39:25 -- nvmf/common.sh@661 -- # echo tcp 00:36:35.858 23:39:25 -- nvmf/common.sh@662 -- # echo 4420 00:36:35.858 23:39:25 -- nvmf/common.sh@663 -- # echo ipv4 00:36:35.858 23:39:25 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:35.858 23:39:25 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:35.858 00:36:35.858 Discovery Log Number of Records 2, Generation counter 2 00:36:35.858 =====Discovery Log Entry 0====== 00:36:35.859 trtype: tcp 00:36:35.859 adrfam: ipv4 00:36:35.859 subtype: current discovery subsystem 00:36:35.859 treq: not specified, sq flow control disable supported 00:36:35.859 portid: 1 00:36:35.859 trsvcid: 4420 00:36:35.859 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:35.859 traddr: 10.0.0.1 00:36:35.859 eflags: none 00:36:35.859 sectype: none 00:36:35.859 =====Discovery Log Entry 1====== 00:36:35.859 trtype: tcp 00:36:35.859 adrfam: ipv4 00:36:35.859 subtype: nvme subsystem 00:36:35.859 treq: not specified, sq flow control disable supported 00:36:35.859 portid: 1 00:36:35.859 trsvcid: 4420 00:36:35.859 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:35.859 traddr: 10.0.0.1 00:36:35.859 eflags: none 00:36:35.859 sectype: none 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@29 -- # 
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:35.859 23:39:25 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:36.119 EAL: No free 2048 kB hugepages reported on node 1 00:36:39.417 Initializing NVMe Controllers 00:36:39.417 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:39.417 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:39.417 Initialization complete. Launching workers. 00:36:39.417 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61144, failed: 0 00:36:39.417 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 61144, failed to submit 0 00:36:39.417 success 0, unsuccess 61144, failed 0 00:36:39.417 23:39:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:39.417 23:39:28 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:39.417 EAL: No free 2048 kB hugepages reported on node 1 00:36:42.716 Initializing NVMe Controllers 00:36:42.716 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:42.716 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:42.716 Initialization complete. Launching workers. 00:36:42.716 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103615, failed: 0 00:36:42.716 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26110, failed to submit 77505 00:36:42.716 success 0, unsuccess 26110, failed 0 00:36:42.716 23:39:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:42.716 23:39:31 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:42.716 EAL: No free 2048 kB hugepages reported on node 1 00:36:45.260 Initializing NVMe Controllers 00:36:45.260 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:45.260 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:45.260 Initialization complete. Launching workers. 
00:36:45.260 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99353, failed: 0 00:36:45.260 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24850, failed to submit 74503 00:36:45.260 success 0, unsuccess 24850, failed 0 00:36:45.260 23:39:34 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:45.260 23:39:34 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:45.260 23:39:34 -- nvmf/common.sh@675 -- # echo 0 00:36:45.260 23:39:34 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:45.260 23:39:34 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:45.260 23:39:34 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:45.260 23:39:34 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:45.260 23:39:34 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:36:45.260 23:39:34 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:36:45.260 23:39:34 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:48.656 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:48.656 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:48.656 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:48.656 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:48.656 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:48.656 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:48.656 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:48.656 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:48.656 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:48.916 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:48.916 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:48.916 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:48.916 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:48.916 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:48.916 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:48.916 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:50.827 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:50.827 00:36:50.827 real 0m20.075s 00:36:50.827 user 0m9.367s 00:36:50.827 sys 0m6.089s 00:36:50.827 23:39:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:50.827 23:39:40 -- common/autotest_common.sh@10 -- # set +x 00:36:50.827 ************************************ 00:36:50.827 END TEST kernel_target_abort 00:36:50.827 ************************************ 00:36:50.827 23:39:40 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:50.827 23:39:40 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:50.827 23:39:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:36:50.827 23:39:40 -- nvmf/common.sh@117 -- # sync 00:36:50.827 23:39:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:50.827 23:39:40 -- nvmf/common.sh@120 -- # set +e 00:36:50.827 23:39:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:50.827 23:39:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:50.827 rmmod nvme_tcp 00:36:51.088 rmmod nvme_fabrics 00:36:51.088 rmmod nvme_keyring 00:36:51.088 23:39:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:51.088 23:39:40 -- nvmf/common.sh@124 -- # set -e 00:36:51.088 23:39:40 -- nvmf/common.sh@125 -- # return 0 00:36:51.088 23:39:40 -- nvmf/common.sh@478 -- # '[' -n 31143 ']' 
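[Annotation] clean_kernel_target, traced just below, is the exact inverse of that configfs setup (a sketch with the same assumed attribute paths): disable the namespace, unlink the subsystem from the port, then remove the configfs directories inside-out before unloading the modules:

echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet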
00:36:51.088 23:39:40 -- nvmf/common.sh@479 -- # killprocess 31143 00:36:51.088 23:39:40 -- common/autotest_common.sh@936 -- # '[' -z 31143 ']' 00:36:51.088 23:39:40 -- common/autotest_common.sh@940 -- # kill -0 31143 00:36:51.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (31143) - No such process 00:36:51.088 23:39:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 31143 is not found' 00:36:51.088 Process with pid 31143 is not found 00:36:51.088 23:39:40 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:36:51.088 23:39:40 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:54.396 Waiting for block devices as requested 00:36:54.396 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:54.396 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:54.396 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:54.659 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:54.659 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:54.659 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:54.921 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:54.921 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:54.921 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:55.182 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:55.182 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:55.443 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:55.443 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:55.443 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:55.443 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:55.705 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:55.705 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:55.966 23:39:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:36:55.967 23:39:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:36:55.967 23:39:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:55.967 23:39:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:55.967 23:39:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.967 23:39:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:55.967 23:39:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.517 23:39:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:58.517 00:36:58.517 real 0m50.954s 00:36:58.517 user 1m3.825s 00:36:58.517 sys 0m18.219s 00:36:58.517 23:39:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:58.517 23:39:47 -- common/autotest_common.sh@10 -- # set +x 00:36:58.517 ************************************ 00:36:58.517 END TEST nvmf_abort_qd_sizes 00:36:58.517 ************************************ 00:36:58.517 23:39:47 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:58.517 23:39:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:58.517 23:39:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:58.517 23:39:47 -- common/autotest_common.sh@10 -- # set +x 00:36:58.517 ************************************ 00:36:58.517 START TEST keyring_file 00:36:58.517 ************************************ 00:36:58.517 23:39:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:58.517 * Looking for test storage... 
00:36:58.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:58.517 23:39:47 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:58.517 23:39:47 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:58.517 23:39:47 -- nvmf/common.sh@7 -- # uname -s 00:36:58.517 23:39:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:58.517 23:39:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:58.517 23:39:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:58.517 23:39:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:58.517 23:39:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:58.517 23:39:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:58.517 23:39:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:58.517 23:39:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:58.517 23:39:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:58.517 23:39:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:58.517 23:39:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:58.517 23:39:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:58.517 23:39:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:58.517 23:39:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:58.517 23:39:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:58.517 23:39:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:58.517 23:39:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:58.517 23:39:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:58.517 23:39:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:58.517 23:39:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:58.517 23:39:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.517 23:39:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.517 23:39:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.517 23:39:47 -- paths/export.sh@5 -- # export PATH 00:36:58.517 23:39:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.517 23:39:47 -- nvmf/common.sh@47 -- # : 0 00:36:58.517 23:39:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:58.517 23:39:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:58.517 23:39:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:58.517 23:39:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:58.517 23:39:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:58.517 23:39:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:58.517 23:39:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:58.517 23:39:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:58.517 23:39:47 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:58.517 23:39:47 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:58.517 23:39:47 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:58.517 23:39:47 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:58.517 23:39:47 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:58.517 23:39:47 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:58.517 23:39:47 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:58.517 23:39:47 -- keyring/common.sh@15 -- # local name key digest path 00:36:58.517 23:39:47 -- keyring/common.sh@17 -- # name=key0 00:36:58.517 23:39:47 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:58.517 23:39:47 -- keyring/common.sh@17 -- # digest=0 00:36:58.517 23:39:47 -- keyring/common.sh@18 -- # mktemp 00:36:58.517 23:39:47 -- keyring/common.sh@18 -- # path=/tmp/tmp.6x4pH4DGSZ 00:36:58.517 23:39:47 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:58.517 23:39:47 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:58.517 23:39:47 -- nvmf/common.sh@691 -- # local prefix key digest 00:36:58.517 23:39:47 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:36:58.517 23:39:47 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:36:58.517 23:39:47 -- nvmf/common.sh@693 -- # digest=0 00:36:58.517 23:39:47 -- nvmf/common.sh@694 -- # python - 00:36:58.517 23:39:47 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6x4pH4DGSZ 00:36:58.517 23:39:47 -- keyring/common.sh@23 -- # echo /tmp/tmp.6x4pH4DGSZ 00:36:58.517 23:39:47 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.6x4pH4DGSZ 00:36:58.517 23:39:47 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:58.517 23:39:47 -- keyring/common.sh@15 -- # local name key digest path 00:36:58.517 23:39:47 -- keyring/common.sh@17 -- # name=key1 00:36:58.517 23:39:47 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:58.517 23:39:47 -- keyring/common.sh@17 -- # digest=0 00:36:58.517 23:39:47 -- keyring/common.sh@18 -- # mktemp 00:36:58.517 23:39:47 -- keyring/common.sh@18 -- # path=/tmp/tmp.B7A35GqNm3 00:36:58.517 23:39:47 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:58.517 23:39:47 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:36:58.517 23:39:47 -- nvmf/common.sh@691 -- # local prefix key digest 00:36:58.517 23:39:47 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:36:58.517 23:39:47 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:36:58.517 23:39:47 -- nvmf/common.sh@693 -- # digest=0 00:36:58.517 23:39:47 -- nvmf/common.sh@694 -- # python - 00:36:58.517 23:39:47 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.B7A35GqNm3 00:36:58.517 23:39:47 -- keyring/common.sh@23 -- # echo /tmp/tmp.B7A35GqNm3 00:36:58.517 23:39:47 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.B7A35GqNm3 00:36:58.517 23:39:47 -- keyring/file.sh@30 -- # tgtpid=41980 00:36:58.517 23:39:47 -- keyring/file.sh@32 -- # waitforlisten 41980 00:36:58.517 23:39:47 -- common/autotest_common.sh@817 -- # '[' -z 41980 ']' 00:36:58.517 23:39:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.517 23:39:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:58.517 23:39:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:58.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:58.517 23:39:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:58.517 23:39:47 -- common/autotest_common.sh@10 -- # set +x 00:36:58.517 23:39:47 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:58.517 [2024-04-26 23:39:47.706422] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:36:58.517 [2024-04-26 23:39:47.706498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41980 ] 00:36:58.517 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.779 [2024-04-26 23:39:47.772213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.779 [2024-04-26 23:39:47.809874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:59.352 23:39:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:59.352 23:39:48 -- common/autotest_common.sh@850 -- # return 0 00:36:59.352 23:39:48 -- keyring/file.sh@33 -- # rpc_cmd 00:36:59.352 23:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:59.352 23:39:48 -- common/autotest_common.sh@10 -- # set +x 00:36:59.352 [2024-04-26 23:39:48.446769] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:59.352 null0 00:36:59.352 [2024-04-26 23:39:48.478813] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:59.352 [2024-04-26 23:39:48.479148] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:59.352 [2024-04-26 23:39:48.486826] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:59.352 23:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:59.352 23:39:48 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:59.352 23:39:48 -- common/autotest_common.sh@638 -- # local es=0 00:36:59.352 23:39:48 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:59.352 23:39:48 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:36:59.352 23:39:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:59.352 23:39:48 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:36:59.352 23:39:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:59.352 23:39:48 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:59.352 23:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:59.352 23:39:48 -- common/autotest_common.sh@10 -- # set +x 00:36:59.352 [2024-04-26 23:39:48.498865] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:36:59.352 { 00:36:59.352 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.352 "secure_channel": false, 00:36:59.352 "listen_address": { 00:36:59.352 "trtype": "tcp", 00:36:59.352 "traddr": "127.0.0.1", 00:36:59.352 "trsvcid": "4420" 00:36:59.352 }, 00:36:59.352 "method": "nvmf_subsystem_add_listener", 00:36:59.352 "req_id": 1 00:36:59.352 } 00:36:59.352 Got JSON-RPC error response 00:36:59.352 response: 00:36:59.352 { 00:36:59.352 "code": -32602, 00:36:59.352 "message": "Invalid parameters" 00:36:59.352 } 00:36:59.352 23:39:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:36:59.352 23:39:48 -- common/autotest_common.sh@641 -- # es=1 00:36:59.352 23:39:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:36:59.352 23:39:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:36:59.352 23:39:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:36:59.352 23:39:48 -- keyring/file.sh@46 -- # bperfpid=42044 00:36:59.352 23:39:48 -- keyring/file.sh@48 -- # waitforlisten 42044 /var/tmp/bperf.sock 00:36:59.352 23:39:48 -- common/autotest_common.sh@817 -- # '[' -z 42044 ']' 00:36:59.352 23:39:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:59.352 23:39:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:59.352 23:39:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:59.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:59.352 23:39:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:59.352 23:39:48 -- common/autotest_common.sh@10 -- # set +x 00:36:59.352 23:39:48 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:59.352 [2024-04-26 23:39:48.550502] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
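[Annotation] The prep_key traces above show how each TLS PSK is materialized: format_interchange_psk wraps the raw hex key in the NVMe/TCP PSK interchange format and the result lands in a chmod-0600 mktemp file. A hedged sketch of what the embedded "python -" step computes, assuming the key string is used verbatim and the CRC32 trailer is little-endian (the authoritative version is format_key in spdk/test/nvmf/common.sh):

  psk=00112233445566778899aabbccddeeff                    # key0 from the trace
  python3 - "$psk" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                                # assumption: the hex string is taken as-is
crc = zlib.crc32(key).to_bytes(4, "little")               # 4-byte CRC32 trailer (assumed little-endian)
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())  # "00" = digest 0 (no HMAC)
EOF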
00:36:59.352 [2024-04-26 23:39:48.550549] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42044 ] 00:36:59.352 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.614 [2024-04-26 23:39:48.608652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.614 [2024-04-26 23:39:48.637502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:59.614 23:39:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:59.614 23:39:48 -- common/autotest_common.sh@850 -- # return 0 00:36:59.614 23:39:48 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6x4pH4DGSZ 00:36:59.614 23:39:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6x4pH4DGSZ 00:36:59.614 23:39:48 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.B7A35GqNm3 00:36:59.614 23:39:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.B7A35GqNm3 00:36:59.875 23:39:49 -- keyring/file.sh@51 -- # get_key key0 00:36:59.875 23:39:49 -- keyring/file.sh@51 -- # jq -r .path 00:36:59.875 23:39:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.875 23:39:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:59.875 23:39:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.137 23:39:49 -- keyring/file.sh@51 -- # [[ /tmp/tmp.6x4pH4DGSZ == \/\t\m\p\/\t\m\p\.\6\x\4\p\H\4\D\G\S\Z ]] 00:37:00.137 23:39:49 -- keyring/file.sh@52 -- # get_key key1 00:37:00.137 23:39:49 -- keyring/file.sh@52 -- # jq -r .path 00:37:00.137 23:39:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.137 23:39:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:00.137 23:39:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.137 23:39:49 -- keyring/file.sh@52 -- # [[ /tmp/tmp.B7A35GqNm3 == \/\t\m\p\/\t\m\p\.\B\7\A\3\5\G\q\N\m\3 ]] 00:37:00.137 23:39:49 -- keyring/file.sh@53 -- # get_refcnt key0 00:37:00.137 23:39:49 -- keyring/common.sh@12 -- # get_key key0 00:37:00.137 23:39:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.137 23:39:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.137 23:39:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.137 23:39:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.398 23:39:49 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:00.398 23:39:49 -- keyring/file.sh@54 -- # get_refcnt key1 00:37:00.398 23:39:49 -- keyring/common.sh@12 -- # get_key key1 00:37:00.398 23:39:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.398 23:39:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.398 23:39:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:00.398 23:39:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.660 23:39:49 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:00.660 23:39:49 
-- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:00.660 23:39:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:00.660 [2024-04-26 23:39:49.806413] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:00.660 nvme0n1 00:37:00.660 23:39:49 -- keyring/file.sh@59 -- # get_refcnt key0 00:37:00.660 23:39:49 -- keyring/common.sh@12 -- # get_key key0 00:37:00.660 23:39:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.660 23:39:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.660 23:39:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.660 23:39:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.925 23:39:50 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:00.925 23:39:50 -- keyring/file.sh@60 -- # get_refcnt key1 00:37:00.925 23:39:50 -- keyring/common.sh@12 -- # get_key key1 00:37:00.925 23:39:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.925 23:39:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.925 23:39:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:00.925 23:39:50 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.187 23:39:50 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:01.187 23:39:50 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:01.187 Running I/O for 1 seconds... 
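[Annotation] Stripped of the xtrace indirection, the happy path exercised here is two RPCs against the bdevperf socket — register the key file under a name, then attach the NVMe/TCP controller using that name as the TLS PSK (both commands appear verbatim in the trace above):

  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6x4pH4DGSZ
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk key0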
00:37:02.132
00:37:02.132 Latency(us)
00:37:02.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:02.132 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:37:02.132 nvme0n1 : 1.01 12171.84 47.55 0.00 0.00 10484.96 5597.87 20862.29
00:37:02.132 ===================================================================================================================
00:37:02.132 Total : 12171.84 47.55 0.00 0.00 10484.96 5597.87 20862.29
00:37:02.132 0
00:37:02.132 23:39:51 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:02.132 23:39:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:02.393 23:39:51 -- keyring/file.sh@65 -- # get_refcnt key0 00:37:02.393 23:39:51 -- keyring/common.sh@12 -- # get_key key0 00:37:02.393 23:39:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.393 23:39:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.393 23:39:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.393 23:39:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:02.654 23:39:51 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:02.654 23:39:51 -- keyring/file.sh@66 -- # get_refcnt key1 00:37:02.654 23:39:51 -- keyring/common.sh@12 -- # get_key key1 00:37:02.654 23:39:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.654 23:39:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.654 23:39:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:02.655 23:39:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.655 23:39:51 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:02.655 23:39:51 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:02.655 23:39:51 -- common/autotest_common.sh@638 -- # local es=0 00:37:02.655 23:39:51 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:02.655 23:39:51 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:37:02.655 23:39:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:37:02.655 23:39:51 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:37:02.655 23:39:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:37:02.655 23:39:51 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:02.655 23:39:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:02.915 [2024-04-26 23:39:51.971923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:02.915 [2024-04-26 23:39:51.972624]
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6931a0 (107): Transport endpoint is not connected 00:37:02.915 [2024-04-26 23:39:51.973618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6931a0 (9): Bad file descriptor 00:37:02.915 [2024-04-26 23:39:51.974619] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:02.915 [2024-04-26 23:39:51.974627] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:02.915 [2024-04-26 23:39:51.974634] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:02.915 request: 00:37:02.915 { 00:37:02.915 "name": "nvme0", 00:37:02.915 "trtype": "tcp", 00:37:02.915 "traddr": "127.0.0.1", 00:37:02.915 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.915 "adrfam": "ipv4", 00:37:02.915 "trsvcid": "4420", 00:37:02.915 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.915 "psk": "key1", 00:37:02.915 "method": "bdev_nvme_attach_controller", 00:37:02.915 "req_id": 1 00:37:02.915 } 00:37:02.915 Got JSON-RPC error response 00:37:02.915 response: 00:37:02.915 { 00:37:02.915 "code": -32602, 00:37:02.915 "message": "Invalid parameters" 00:37:02.915 } 00:37:02.915 23:39:51 -- common/autotest_common.sh@641 -- # es=1 00:37:02.915 23:39:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:37:02.915 23:39:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:37:02.915 23:39:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:37:02.915 23:39:51 -- keyring/file.sh@71 -- # get_refcnt key0 00:37:02.915 23:39:51 -- keyring/common.sh@12 -- # get_key key0 00:37:02.915 23:39:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.915 23:39:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.915 23:39:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:02.915 23:39:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.915 23:39:52 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:02.915 23:39:52 -- keyring/file.sh@72 -- # get_refcnt key1 00:37:02.915 23:39:52 -- keyring/common.sh@12 -- # get_key key1 00:37:02.915 23:39:52 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.915 23:39:52 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.915 23:39:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.915 23:39:52 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:03.176 23:39:52 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:03.176 23:39:52 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:03.176 23:39:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:03.435 23:39:52 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:03.435 23:39:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:03.435 23:39:52 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:03.435 23:39:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.436 23:39:52 -- keyring/file.sh@77 -- # jq length 00:37:03.696 23:39:52 -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:03.696 23:39:52 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.6x4pH4DGSZ 00:37:03.696 23:39:52 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.6x4pH4DGSZ 00:37:03.696 23:39:52 -- common/autotest_common.sh@638 -- # local es=0 00:37:03.696 23:39:52 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.6x4pH4DGSZ 00:37:03.696 23:39:52 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:37:03.696 23:39:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:37:03.696 23:39:52 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:37:03.696 23:39:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:37:03.696 23:39:52 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6x4pH4DGSZ 00:37:03.696 23:39:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6x4pH4DGSZ 00:37:03.696 [2024-04-26 23:39:52.921298] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6x4pH4DGSZ': 0100660 00:37:03.696 [2024-04-26 23:39:52.921321] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:03.696 request: 00:37:03.696 { 00:37:03.696 "name": "key0", 00:37:03.696 "path": "/tmp/tmp.6x4pH4DGSZ", 00:37:03.696 "method": "keyring_file_add_key", 00:37:03.696 "req_id": 1 00:37:03.696 } 00:37:03.696 Got JSON-RPC error response 00:37:03.696 response: 00:37:03.696 { 00:37:03.696 "code": -1, 00:37:03.696 "message": "Operation not permitted" 00:37:03.696 } 00:37:03.696 23:39:52 -- common/autotest_common.sh@641 -- # es=1 00:37:03.696 23:39:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:37:03.696 23:39:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:37:03.696 23:39:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:37:03.696 23:39:52 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.6x4pH4DGSZ 00:37:03.696 23:39:52 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6x4pH4DGSZ 00:37:03.696 23:39:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6x4pH4DGSZ 00:37:03.956 23:39:53 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.6x4pH4DGSZ 00:37:03.956 23:39:53 -- keyring/file.sh@88 -- # get_refcnt key0 00:37:03.956 23:39:53 -- keyring/common.sh@12 -- # get_key key0 00:37:03.956 23:39:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:03.956 23:39:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.956 23:39:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.956 23:39:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:04.218 23:39:53 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:04.218 23:39:53 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.218 23:39:53 -- common/autotest_common.sh@638 -- # local es=0 00:37:04.218 23:39:53 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.218 23:39:53 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:37:04.218 23:39:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:37:04.218 23:39:53 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:37:04.218 23:39:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:37:04.218 23:39:53 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.218 23:39:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.218 [2024-04-26 23:39:53.402538] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.6x4pH4DGSZ': No such file or directory 00:37:04.218 [2024-04-26 23:39:53.402554] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:04.218 [2024-04-26 23:39:53.402577] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:04.218 [2024-04-26 23:39:53.402588] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:04.218 [2024-04-26 23:39:53.402595] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:04.218 request: 00:37:04.218 { 00:37:04.218 "name": "nvme0", 00:37:04.218 "trtype": "tcp", 00:37:04.218 "traddr": "127.0.0.1", 00:37:04.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:04.218 "adrfam": "ipv4", 00:37:04.218 "trsvcid": "4420", 00:37:04.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.218 "psk": "key0", 00:37:04.218 "method": "bdev_nvme_attach_controller", 00:37:04.218 "req_id": 1 00:37:04.218 } 00:37:04.218 Got JSON-RPC error response 00:37:04.218 response: 00:37:04.218 { 00:37:04.218 "code": -19, 00:37:04.218 "message": "No such device" 00:37:04.218 } 00:37:04.218 23:39:53 -- common/autotest_common.sh@641 -- # es=1 00:37:04.218 23:39:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:37:04.218 23:39:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:37:04.218 23:39:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:37:04.218 23:39:53 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:04.218 23:39:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:04.479 23:39:53 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:04.479 23:39:53 -- keyring/common.sh@15 -- # local name key digest path 00:37:04.479 23:39:53 -- keyring/common.sh@17 -- # name=key0 00:37:04.479 23:39:53 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:04.479 23:39:53 -- keyring/common.sh@17 -- # digest=0 00:37:04.479 23:39:53 -- keyring/common.sh@18 -- # mktemp 00:37:04.479 23:39:53 -- keyring/common.sh@18 -- # path=/tmp/tmp.crMPZT5UjH 00:37:04.479 23:39:53 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:04.479 23:39:53 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:04.479 23:39:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:37:04.479 23:39:53 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:37:04.479 23:39:53 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:37:04.479 23:39:53 -- nvmf/common.sh@693 -- # digest=0 00:37:04.479 23:39:53 -- nvmf/common.sh@694 -- # python - 00:37:04.479 23:39:53 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.crMPZT5UjH 00:37:04.479 23:39:53 -- keyring/common.sh@23 -- # echo /tmp/tmp.crMPZT5UjH 00:37:04.479 23:39:53 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.crMPZT5UjH 00:37:04.479 23:39:53 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.crMPZT5UjH 00:37:04.479 23:39:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.crMPZT5UjH 00:37:04.740 23:39:53 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.740 23:39:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.740 nvme0n1 00:37:04.740 23:39:53 -- keyring/file.sh@99 -- # get_refcnt key0 00:37:04.740 23:39:53 -- keyring/common.sh@12 -- # get_key key0 00:37:04.740 23:39:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:05.001 23:39:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.001 23:39:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.001 23:39:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:05.001 23:39:54 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:05.001 23:39:54 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:05.001 23:39:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:05.263 23:39:54 -- keyring/file.sh@101 -- # get_key key0 00:37:05.263 23:39:54 -- keyring/file.sh@101 -- # jq -r .removed 00:37:05.263 23:39:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.263 23:39:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.263 23:39:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:05.263 23:39:54 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:05.263 23:39:54 -- keyring/file.sh@102 -- # get_refcnt key0 00:37:05.263 23:39:54 -- keyring/common.sh@12 -- # get_key key0 00:37:05.263 23:39:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:05.263 23:39:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.263 23:39:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:05.263 23:39:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.525 23:39:54 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:05.525 23:39:54 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:05.525 23:39:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:05.785 23:39:54 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:05.785 23:39:54 -- keyring/file.sh@104 -- # jq length 00:37:05.785 
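[Annotation] The keyring_file_check_path failure earlier in the trace ("Invalid permissions for key file ... 0100660") is the keyring module's permission gate: a key file readable by group or other is refused, and the test flips the mode back before re-adding the key:

  chmod 0660 /tmp/tmp.6x4pH4DGSZ   # group-readable: keyring_file_add_key returns "Operation not permitted"
  chmod 0600 /tmp/tmp.6x4pH4DGSZ   # owner-only: the same RPC succeeds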
23:39:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.785 23:39:54 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:05.785 23:39:54 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.crMPZT5UjH 00:37:05.785 23:39:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.crMPZT5UjH 00:37:06.046 23:39:55 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.B7A35GqNm3 00:37:06.046 23:39:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.B7A35GqNm3 00:37:06.046 23:39:55 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:06.046 23:39:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:06.305 nvme0n1 00:37:06.305 23:39:55 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:06.305 23:39:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:06.565 23:39:55 -- keyring/file.sh@112 -- # config='{ 00:37:06.565 "subsystems": [ 00:37:06.565 { 00:37:06.565 "subsystem": "keyring", 00:37:06.565 "config": [ 00:37:06.565 { 00:37:06.565 "method": "keyring_file_add_key", 00:37:06.565 "params": { 00:37:06.565 "name": "key0", 00:37:06.565 "path": "/tmp/tmp.crMPZT5UjH" 00:37:06.565 } 00:37:06.565 }, 00:37:06.565 { 00:37:06.565 "method": "keyring_file_add_key", 00:37:06.565 "params": { 00:37:06.565 "name": "key1", 00:37:06.565 "path": "/tmp/tmp.B7A35GqNm3" 00:37:06.565 } 00:37:06.565 } 00:37:06.565 ] 00:37:06.565 }, 00:37:06.565 { 00:37:06.565 "subsystem": "iobuf", 00:37:06.565 "config": [ 00:37:06.565 { 00:37:06.565 "method": "iobuf_set_options", 00:37:06.565 "params": { 00:37:06.565 "small_pool_count": 8192, 00:37:06.565 "large_pool_count": 1024, 00:37:06.565 "small_bufsize": 8192, 00:37:06.565 "large_bufsize": 135168 00:37:06.565 } 00:37:06.565 } 00:37:06.565 ] 00:37:06.565 }, 00:37:06.565 { 00:37:06.565 "subsystem": "sock", 00:37:06.565 "config": [ 00:37:06.565 { 00:37:06.565 "method": "sock_impl_set_options", 00:37:06.565 "params": { 00:37:06.565 "impl_name": "posix", 00:37:06.565 "recv_buf_size": 2097152, 00:37:06.565 "send_buf_size": 2097152, 00:37:06.565 "enable_recv_pipe": true, 00:37:06.565 "enable_quickack": false, 00:37:06.565 "enable_placement_id": 0, 00:37:06.565 "enable_zerocopy_send_server": true, 00:37:06.565 "enable_zerocopy_send_client": false, 00:37:06.565 "zerocopy_threshold": 0, 00:37:06.565 "tls_version": 0, 00:37:06.565 "enable_ktls": false 00:37:06.565 } 00:37:06.565 }, 00:37:06.565 { 00:37:06.565 "method": "sock_impl_set_options", 00:37:06.565 "params": { 00:37:06.565 "impl_name": "ssl", 00:37:06.565 "recv_buf_size": 4096, 00:37:06.565 "send_buf_size": 4096, 00:37:06.565 "enable_recv_pipe": true, 00:37:06.565 "enable_quickack": false, 00:37:06.565 "enable_placement_id": 0, 00:37:06.565 "enable_zerocopy_send_server": true, 00:37:06.565 "enable_zerocopy_send_client": false, 00:37:06.565 "zerocopy_threshold": 0, 00:37:06.565 
"tls_version": 0, 00:37:06.565 "enable_ktls": false 00:37:06.565 } 00:37:06.565 } 00:37:06.565 ] 00:37:06.565 }, 00:37:06.565 { 00:37:06.565 "subsystem": "vmd", 00:37:06.565 "config": [] 00:37:06.565 }, 00:37:06.565 { 00:37:06.565 "subsystem": "accel", 00:37:06.565 "config": [ 00:37:06.565 { 00:37:06.565 "method": "accel_set_options", 00:37:06.565 "params": { 00:37:06.565 "small_cache_size": 128, 00:37:06.565 "large_cache_size": 16, 00:37:06.566 "task_count": 2048, 00:37:06.566 "sequence_count": 2048, 00:37:06.566 "buf_count": 2048 00:37:06.566 } 00:37:06.566 } 00:37:06.566 ] 00:37:06.566 }, 00:37:06.566 { 00:37:06.566 "subsystem": "bdev", 00:37:06.566 "config": [ 00:37:06.566 { 00:37:06.566 "method": "bdev_set_options", 00:37:06.566 "params": { 00:37:06.566 "bdev_io_pool_size": 65535, 00:37:06.566 "bdev_io_cache_size": 256, 00:37:06.566 "bdev_auto_examine": true, 00:37:06.566 "iobuf_small_cache_size": 128, 00:37:06.566 "iobuf_large_cache_size": 16 00:37:06.566 } 00:37:06.566 }, 00:37:06.566 { 00:37:06.566 "method": "bdev_raid_set_options", 00:37:06.566 "params": { 00:37:06.566 "process_window_size_kb": 1024 00:37:06.566 } 00:37:06.566 }, 00:37:06.566 { 00:37:06.566 "method": "bdev_iscsi_set_options", 00:37:06.566 "params": { 00:37:06.566 "timeout_sec": 30 00:37:06.566 } 00:37:06.566 }, 00:37:06.566 { 00:37:06.566 "method": "bdev_nvme_set_options", 00:37:06.566 "params": { 00:37:06.566 "action_on_timeout": "none", 00:37:06.566 "timeout_us": 0, 00:37:06.566 "timeout_admin_us": 0, 00:37:06.566 "keep_alive_timeout_ms": 10000, 00:37:06.566 "arbitration_burst": 0, 00:37:06.566 "low_priority_weight": 0, 00:37:06.566 "medium_priority_weight": 0, 00:37:06.566 "high_priority_weight": 0, 00:37:06.566 "nvme_adminq_poll_period_us": 10000, 00:37:06.566 "nvme_ioq_poll_period_us": 0, 00:37:06.566 "io_queue_requests": 512, 00:37:06.566 "delay_cmd_submit": true, 00:37:06.566 "transport_retry_count": 4, 00:37:06.566 "bdev_retry_count": 3, 00:37:06.566 "transport_ack_timeout": 0, 00:37:06.566 "ctrlr_loss_timeout_sec": 0, 00:37:06.566 "reconnect_delay_sec": 0, 00:37:06.566 "fast_io_fail_timeout_sec": 0, 00:37:06.566 "disable_auto_failback": false, 00:37:06.566 "generate_uuids": false, 00:37:06.566 "transport_tos": 0, 00:37:06.566 "nvme_error_stat": false, 00:37:06.566 "rdma_srq_size": 0, 00:37:06.566 "io_path_stat": false, 00:37:06.566 "allow_accel_sequence": false, 00:37:06.566 "rdma_max_cq_size": 0, 00:37:06.566 "rdma_cm_event_timeout_ms": 0, 00:37:06.566 "dhchap_digests": [ 00:37:06.566 "sha256", 00:37:06.566 "sha384", 00:37:06.566 "sha512" 00:37:06.566 ], 00:37:06.566 "dhchap_dhgroups": [ 00:37:06.566 "null", 00:37:06.566 "ffdhe2048", 00:37:06.566 "ffdhe3072", 00:37:06.566 "ffdhe4096", 00:37:06.566 "ffdhe6144", 00:37:06.566 "ffdhe8192" 00:37:06.566 ] 00:37:06.566 } 00:37:06.566 }, 00:37:06.566 { 00:37:06.566 "method": "bdev_nvme_attach_controller", 00:37:06.566 "params": { 00:37:06.566 "name": "nvme0", 00:37:06.566 "trtype": "TCP", 00:37:06.566 "adrfam": "IPv4", 00:37:06.566 "traddr": "127.0.0.1", 00:37:06.566 "trsvcid": "4420", 00:37:06.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.566 "prchk_reftag": false, 00:37:06.566 "prchk_guard": false, 00:37:06.566 "ctrlr_loss_timeout_sec": 0, 00:37:06.566 "reconnect_delay_sec": 0, 00:37:06.566 "fast_io_fail_timeout_sec": 0, 00:37:06.566 "psk": "key0", 00:37:06.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:06.566 "hdgst": false, 00:37:06.566 "ddgst": false 00:37:06.566 } 00:37:06.566 }, 00:37:06.566 { 00:37:06.566 "method": "bdev_nvme_set_hotplug", 
00:37:06.566 "params": { 00:37:06.566 "period_us": 100000, 00:37:06.566 "enable": false 00:37:06.566 } 00:37:06.566 }, 00:37:06.566 { 00:37:06.566 "method": "bdev_wait_for_examine" 00:37:06.566 } 00:37:06.566 ] 00:37:06.566 }, 00:37:06.566 { 00:37:06.566 "subsystem": "nbd", 00:37:06.566 "config": [] 00:37:06.566 } 00:37:06.566 ] 00:37:06.566 }' 00:37:06.566 23:39:55 -- keyring/file.sh@114 -- # killprocess 42044 00:37:06.566 23:39:55 -- common/autotest_common.sh@936 -- # '[' -z 42044 ']' 00:37:06.566 23:39:55 -- common/autotest_common.sh@940 -- # kill -0 42044 00:37:06.566 23:39:55 -- common/autotest_common.sh@941 -- # uname 00:37:06.566 23:39:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:37:06.566 23:39:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 42044 00:37:06.566 23:39:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:37:06.566 23:39:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:37:06.566 23:39:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 42044' 00:37:06.566 killing process with pid 42044 00:37:06.566 23:39:55 -- common/autotest_common.sh@955 -- # kill 42044 00:37:06.566 Received shutdown signal, test time was about 1.000000 seconds 00:37:06.566 00:37:06.566 Latency(us) 00:37:06.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.566 =================================================================================================================== 00:37:06.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:06.566 23:39:55 -- common/autotest_common.sh@960 -- # wait 42044 00:37:06.827 23:39:55 -- keyring/file.sh@117 -- # bperfpid=43519 00:37:06.827 23:39:55 -- keyring/file.sh@119 -- # waitforlisten 43519 /var/tmp/bperf.sock 00:37:06.827 23:39:55 -- common/autotest_common.sh@817 -- # '[' -z 43519 ']' 00:37:06.827 23:39:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:06.827 23:39:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:37:06.827 23:39:55 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:06.827 23:39:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:06.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
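[Annotation] The -c /dev/fd/63 argument passed to the second bdevperf instance below is bash process substitution: the JSON captured from the first instance via save_config is replayed as the new instance's startup configuration, keyring entries included. A hedged sketch of the pattern:

  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
  build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")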
00:37:06.827 23:39:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:37:06.827 23:39:55 -- common/autotest_common.sh@10 -- # set +x 00:37:06.827 23:39:55 -- keyring/file.sh@115 -- # echo '{ 00:37:06.827 "subsystems": [ 00:37:06.827 { 00:37:06.827 "subsystem": "keyring", 00:37:06.827 "config": [ 00:37:06.827 { 00:37:06.827 "method": "keyring_file_add_key", 00:37:06.827 "params": { 00:37:06.827 "name": "key0", 00:37:06.827 "path": "/tmp/tmp.crMPZT5UjH" 00:37:06.827 } 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "method": "keyring_file_add_key", 00:37:06.827 "params": { 00:37:06.827 "name": "key1", 00:37:06.827 "path": "/tmp/tmp.B7A35GqNm3" 00:37:06.827 } 00:37:06.827 } 00:37:06.827 ] 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "subsystem": "iobuf", 00:37:06.827 "config": [ 00:37:06.827 { 00:37:06.827 "method": "iobuf_set_options", 00:37:06.827 "params": { 00:37:06.827 "small_pool_count": 8192, 00:37:06.827 "large_pool_count": 1024, 00:37:06.827 "small_bufsize": 8192, 00:37:06.827 "large_bufsize": 135168 00:37:06.827 } 00:37:06.827 } 00:37:06.827 ] 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "subsystem": "sock", 00:37:06.827 "config": [ 00:37:06.827 { 00:37:06.827 "method": "sock_impl_set_options", 00:37:06.827 "params": { 00:37:06.827 "impl_name": "posix", 00:37:06.827 "recv_buf_size": 2097152, 00:37:06.827 "send_buf_size": 2097152, 00:37:06.827 "enable_recv_pipe": true, 00:37:06.827 "enable_quickack": false, 00:37:06.827 "enable_placement_id": 0, 00:37:06.827 "enable_zerocopy_send_server": true, 00:37:06.827 "enable_zerocopy_send_client": false, 00:37:06.827 "zerocopy_threshold": 0, 00:37:06.827 "tls_version": 0, 00:37:06.827 "enable_ktls": false 00:37:06.827 } 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "method": "sock_impl_set_options", 00:37:06.827 "params": { 00:37:06.827 "impl_name": "ssl", 00:37:06.827 "recv_buf_size": 4096, 00:37:06.827 "send_buf_size": 4096, 00:37:06.827 "enable_recv_pipe": true, 00:37:06.827 "enable_quickack": false, 00:37:06.827 "enable_placement_id": 0, 00:37:06.827 "enable_zerocopy_send_server": true, 00:37:06.827 "enable_zerocopy_send_client": false, 00:37:06.827 "zerocopy_threshold": 0, 00:37:06.827 "tls_version": 0, 00:37:06.827 "enable_ktls": false 00:37:06.827 } 00:37:06.827 } 00:37:06.827 ] 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "subsystem": "vmd", 00:37:06.827 "config": [] 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "subsystem": "accel", 00:37:06.827 "config": [ 00:37:06.827 { 00:37:06.827 "method": "accel_set_options", 00:37:06.827 "params": { 00:37:06.827 "small_cache_size": 128, 00:37:06.827 "large_cache_size": 16, 00:37:06.827 "task_count": 2048, 00:37:06.827 "sequence_count": 2048, 00:37:06.827 "buf_count": 2048 00:37:06.827 } 00:37:06.827 } 00:37:06.827 ] 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "subsystem": "bdev", 00:37:06.827 "config": [ 00:37:06.827 { 00:37:06.827 "method": "bdev_set_options", 00:37:06.827 "params": { 00:37:06.827 "bdev_io_pool_size": 65535, 00:37:06.827 "bdev_io_cache_size": 256, 00:37:06.827 "bdev_auto_examine": true, 00:37:06.827 "iobuf_small_cache_size": 128, 00:37:06.827 "iobuf_large_cache_size": 16 00:37:06.827 } 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "method": "bdev_raid_set_options", 00:37:06.827 "params": { 00:37:06.827 "process_window_size_kb": 1024 00:37:06.827 } 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "method": "bdev_iscsi_set_options", 00:37:06.827 "params": { 00:37:06.827 "timeout_sec": 30 00:37:06.827 } 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "method": "bdev_nvme_set_options", 
00:37:06.827 "params": { 00:37:06.827 "action_on_timeout": "none", 00:37:06.827 "timeout_us": 0, 00:37:06.827 "timeout_admin_us": 0, 00:37:06.827 "keep_alive_timeout_ms": 10000, 00:37:06.827 "arbitration_burst": 0, 00:37:06.827 "low_priority_weight": 0, 00:37:06.827 "medium_priority_weight": 0, 00:37:06.827 "high_priority_weight": 0, 00:37:06.827 "nvme_adminq_poll_period_us": 10000, 00:37:06.827 "nvme_ioq_poll_period_us": 0, 00:37:06.827 "io_queue_requests": 512, 00:37:06.827 "delay_cmd_submit": true, 00:37:06.827 "transport_retry_count": 4, 00:37:06.827 "bdev_retry_count": 3, 00:37:06.827 "transport_ack_timeout": 0, 00:37:06.827 "ctrlr_loss_timeout_sec": 0, 00:37:06.827 "reconnect_delay_sec": 0, 00:37:06.827 "fast_io_fail_timeout_sec": 0, 00:37:06.827 "disable_auto_failback": false, 00:37:06.827 "generate_uuids": false, 00:37:06.827 "transport_tos": 0, 00:37:06.827 "nvme_error_stat": false, 00:37:06.827 "rdma_srq_size": 0, 00:37:06.827 "io_path_stat": false, 00:37:06.827 "allow_accel_sequence": false, 00:37:06.827 "rdma_max_cq_size": 0, 00:37:06.827 "rdma_cm_event_timeout_ms": 0, 00:37:06.827 "dhchap_digests": [ 00:37:06.827 "sha256", 00:37:06.827 "sha384", 00:37:06.827 "sha512" 00:37:06.827 ], 00:37:06.827 "dhchap_dhgroups": [ 00:37:06.827 "null", 00:37:06.827 "ffdhe2048", 00:37:06.827 "ffdhe3072", 00:37:06.827 "ffdhe4096", 00:37:06.827 "ffdhe6144", 00:37:06.827 "ffdhe8192" 00:37:06.827 ] 00:37:06.827 } 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "method": "bdev_nvme_attach_controller", 00:37:06.827 "params": { 00:37:06.827 "name": "nvme0", 00:37:06.827 "trtype": "TCP", 00:37:06.827 "adrfam": "IPv4", 00:37:06.827 "traddr": "127.0.0.1", 00:37:06.827 "trsvcid": "4420", 00:37:06.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.827 "prchk_reftag": false, 00:37:06.827 "prchk_guard": false, 00:37:06.827 "ctrlr_loss_timeout_sec": 0, 00:37:06.827 "reconnect_delay_sec": 0, 00:37:06.827 "fast_io_fail_timeout_sec": 0, 00:37:06.827 "psk": "key0", 00:37:06.827 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:06.827 "hdgst": false, 00:37:06.827 "ddgst": false 00:37:06.827 } 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "method": "bdev_nvme_set_hotplug", 00:37:06.827 "params": { 00:37:06.827 "period_us": 100000, 00:37:06.827 "enable": false 00:37:06.827 } 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "method": "bdev_wait_for_examine" 00:37:06.827 } 00:37:06.827 ] 00:37:06.827 }, 00:37:06.827 { 00:37:06.827 "subsystem": "nbd", 00:37:06.827 "config": [] 00:37:06.828 } 00:37:06.828 ] 00:37:06.828 }' 00:37:06.828 [2024-04-26 23:39:55.915605] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
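[Annotation] The get_refcnt helper used throughout resolves to a keyring_get_keys RPC filtered with jq; the same one-liner works for ad-hoc inspection, e.g. confirming that an attached controller still holds a reference on key0:

  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == "key0") | .refcnt'

The .removed field in the same output flips to true when a still-referenced key is removed, which is exactly what the check at keyring/file.sh@101 above asserts.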
00:37:06.828 [2024-04-26 23:39:55.915656] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43519 ] 00:37:06.828 EAL: No free 2048 kB hugepages reported on node 1 00:37:06.828 [2024-04-26 23:39:55.975298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.828 [2024-04-26 23:39:56.003698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.088 [2024-04-26 23:39:56.136096] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:07.659 23:39:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:37:07.659 23:39:56 -- common/autotest_common.sh@850 -- # return 0 00:37:07.659 23:39:56 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:07.659 23:39:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.659 23:39:56 -- keyring/file.sh@120 -- # jq length 00:37:07.659 23:39:56 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:07.659 23:39:56 -- keyring/file.sh@121 -- # get_refcnt key0 00:37:07.659 23:39:56 -- keyring/common.sh@12 -- # get_key key0 00:37:07.659 23:39:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:07.659 23:39:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:07.659 23:39:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.659 23:39:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.919 23:39:56 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:07.919 23:39:56 -- keyring/file.sh@122 -- # get_refcnt key1 00:37:07.919 23:39:56 -- keyring/common.sh@12 -- # get_key key1 00:37:07.919 23:39:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:07.919 23:39:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.919 23:39:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.919 23:39:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:07.919 23:39:57 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:07.919 23:39:57 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:07.919 23:39:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:07.919 23:39:57 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:08.180 23:39:57 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:08.180 23:39:57 -- keyring/file.sh@1 -- # cleanup 00:37:08.180 23:39:57 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.crMPZT5UjH /tmp/tmp.B7A35GqNm3 00:37:08.180 23:39:57 -- keyring/file.sh@20 -- # killprocess 43519 00:37:08.180 23:39:57 -- common/autotest_common.sh@936 -- # '[' -z 43519 ']' 00:37:08.180 23:39:57 -- common/autotest_common.sh@940 -- # kill -0 43519 00:37:08.180 23:39:57 -- common/autotest_common.sh@941 -- # uname 00:37:08.180 23:39:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:37:08.180 23:39:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 43519 00:37:08.180 23:39:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:37:08.180 23:39:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:37:08.180 23:39:57 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 43519' 00:37:08.180 killing process with pid 43519 00:37:08.180 23:39:57 -- common/autotest_common.sh@955 -- # kill 43519 00:37:08.180 Received shutdown signal, test time was about 1.000000 seconds 00:37:08.180 00:37:08.180 Latency(us) 00:37:08.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.180 =================================================================================================================== 00:37:08.180 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:08.180 23:39:57 -- common/autotest_common.sh@960 -- # wait 43519 00:37:08.440 23:39:57 -- keyring/file.sh@21 -- # killprocess 41980 00:37:08.440 23:39:57 -- common/autotest_common.sh@936 -- # '[' -z 41980 ']' 00:37:08.440 23:39:57 -- common/autotest_common.sh@940 -- # kill -0 41980 00:37:08.440 23:39:57 -- common/autotest_common.sh@941 -- # uname 00:37:08.440 23:39:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:37:08.440 23:39:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 41980 00:37:08.440 23:39:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:37:08.440 23:39:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:37:08.440 23:39:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 41980' 00:37:08.440 killing process with pid 41980 00:37:08.440 23:39:57 -- common/autotest_common.sh@955 -- # kill 41980 00:37:08.440 [2024-04-26 23:39:57.529008] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:08.440 23:39:57 -- common/autotest_common.sh@960 -- # wait 41980 00:37:08.701 00:37:08.701 real 0m10.324s 00:37:08.701 user 0m24.464s 00:37:08.701 sys 0m2.638s 00:37:08.701 23:39:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:08.701 23:39:57 -- common/autotest_common.sh@10 -- # set +x 00:37:08.701 ************************************ 00:37:08.701 END TEST keyring_file 00:37:08.701 ************************************ 00:37:08.701 23:39:57 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:37:08.701 23:39:57 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:37:08.701 23:39:57 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:37:08.701 23:39:57 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:37:08.701 23:39:57 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:37:08.701 23:39:57 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:37:08.701 23:39:57 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:37:08.701 23:39:57 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:37:08.701 23:39:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:37:08.701 23:39:57 -- common/autotest_common.sh@10 -- # set +x 00:37:08.701 23:39:57 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:37:08.701 
23:39:57 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:37:08.701 23:39:57 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:37:08.701 23:39:57 -- common/autotest_common.sh@10 -- # set +x 00:37:16.841 INFO: APP EXITING 00:37:16.841 INFO: killing all VMs 00:37:16.841 INFO: killing vhost app 00:37:16.841 INFO: EXIT DONE 00:37:19.386 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:19.386 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:19.386 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:37:23.596 Cleaning 00:37:23.596 Removing: /var/run/dpdk/spdk0/config 00:37:23.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:23.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:23.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:23.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:23.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:23.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:23.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:23.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:23.596 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:23.596 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:23.596 Removing: /var/run/dpdk/spdk1/config 00:37:23.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:23.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:23.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:23.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:23.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:23.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:23.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:23.596 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:23.596 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:23.596 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:23.596 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:23.596 Removing: /var/run/dpdk/spdk2/config 00:37:23.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:23.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:23.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:23.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:23.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:23.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:23.596 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:23.596 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:23.596 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:23.596 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:23.596 Removing: /var/run/dpdk/spdk3/config 00:37:23.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:23.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:23.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:23.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:23.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:23.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:23.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:23.596 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:23.596 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:23.596 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:23.596 Removing: /var/run/dpdk/spdk4/config 00:37:23.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:23.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:23.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:23.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:23.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:23.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:23.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:23.596 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:23.596 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:23.596 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:23.596 Removing: /dev/shm/bdev_svc_trace.1 00:37:23.596 Removing: /dev/shm/nvmf_trace.0 00:37:23.596 Removing: /dev/shm/spdk_tgt_trace.pid3717896 00:37:23.596 Removing: /var/run/dpdk/spdk0 00:37:23.596 Removing: /var/run/dpdk/spdk1 00:37:23.596 Removing: /var/run/dpdk/spdk2 00:37:23.596 Removing: /var/run/dpdk/spdk3 00:37:23.596 Removing: /var/run/dpdk/spdk4 00:37:23.596 Removing: /var/run/dpdk/spdk_pid11492 00:37:23.596 Removing: /var/run/dpdk/spdk_pid13716 00:37:23.596 Removing: /var/run/dpdk/spdk_pid16220 00:37:23.596 Removing: /var/run/dpdk/spdk_pid17439 00:37:23.596 Removing: /var/run/dpdk/spdk_pid19928 00:37:23.596 Removing: /var/run/dpdk/spdk_pid21297 00:37:23.596 Removing: /var/run/dpdk/spdk_pid31631 00:37:23.596 Removing: /var/run/dpdk/spdk_pid32439 00:37:23.596 Removing: /var/run/dpdk/spdk_pid33105 00:37:23.596 Removing: /var/run/dpdk/spdk_pid36086 00:37:23.596 Removing: /var/run/dpdk/spdk_pid36553 00:37:23.596 Removing: /var/run/dpdk/spdk_pid37104 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3716237 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3717896 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3718786 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3719827 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3720167 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3721252 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3721500 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3721721 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3722842 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3723328 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3723704 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3724106 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3724514 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3724909 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3725118 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3725320 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3725708 00:37:23.596 Removing: 
/var/run/dpdk/spdk_pid3727115 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3730377 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3730744 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3731118 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3731184 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3731830 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3731848 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3732373 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3732559 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3732929 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3732951 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3733357 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3733494 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3734182 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3734345 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3734654 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3735259 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3735452 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3735898 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3736290 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3736575 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3736805 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3737051 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3737407 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3737767 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3738122 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3738485 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3738709 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3738930 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3739250 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3739603 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3739965 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3740322 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3740563 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3740778 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3741087 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3741449 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3741807 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3742173 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3742242 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3742669 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3747210 00:37:23.596 Removing: /var/run/dpdk/spdk_pid3845186 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3850295 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3860895 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3867344 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3872102 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3872828 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3887432 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3887490 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3888494 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3889499 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3890509 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3891179 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3891181 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3891518 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3891532 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3891537 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3892586 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3893587 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3894696 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3895317 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3895448 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3895696 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3896992 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3898138 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3908149 00:37:23.597 Removing: 
/var/run/dpdk/spdk_pid3908501 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3913622 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3920552 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3923610 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3936363 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3947171 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3949181 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3950189 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3970673 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3975422 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3980956 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3983286 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3985437 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3985471 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3985490 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3985705 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3986190 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3988217 00:37:23.597 Removing: /var/run/dpdk/spdk_pid3988953 00:37:23.858 Removing: /var/run/dpdk/spdk_pid3989330 00:37:23.858 Removing: /var/run/dpdk/spdk_pid3992029 00:37:23.858 Removing: /var/run/dpdk/spdk_pid3992405 00:37:23.858 Removing: /var/run/dpdk/spdk_pid3993163 00:37:23.858 Removing: /var/run/dpdk/spdk_pid3998230 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4004812 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4010645 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4055351 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4059984 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4067189 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4068823 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4070754 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4076361 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4081134 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4090094 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4090199 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4095141 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4095469 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4095711 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4096145 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4096156 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4097511 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4099510 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4101508 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4103490 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4105359 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4107223 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4114614 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4115200 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4116112 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4117136 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4123935 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4127016 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4133400 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4140000 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4148278 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4148315 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4170762 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4171414 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4172119 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4172654 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4173590 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4174054 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4175222 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4175818 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4181068 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4181270 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4188669 00:37:23.858 Removing: 
/var/run/dpdk/spdk_pid4188786 00:37:23.858 Removing: /var/run/dpdk/spdk_pid4191563 00:37:23.858 Removing: /var/run/dpdk/spdk_pid41980 00:37:23.858 Removing: /var/run/dpdk/spdk_pid42044 00:37:23.858 Removing: /var/run/dpdk/spdk_pid43519 00:37:23.858 Removing: /var/run/dpdk/spdk_pid5508 00:37:23.858 Removing: /var/run/dpdk/spdk_pid5515 00:37:23.858 Clean 00:37:24.119 23:40:13 -- common/autotest_common.sh@1437 -- # return 0 00:37:24.119 23:40:13 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:37:24.119 23:40:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:37:24.119 23:40:13 -- common/autotest_common.sh@10 -- # set +x 00:37:24.119 23:40:13 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:37:24.119 23:40:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:37:24.119 23:40:13 -- common/autotest_common.sh@10 -- # set +x 00:37:24.379 23:40:13 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:24.379 23:40:13 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:24.379 23:40:13 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:24.379 23:40:13 -- spdk/autotest.sh@389 -- # hash lcov 00:37:24.379 23:40:13 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:24.379 23:40:13 -- spdk/autotest.sh@391 -- # hostname 00:37:24.379 23:40:13 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:24.380 geninfo: WARNING: invalid characters removed from testname! 
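Note: the geninfo warning above closes the coverage capture; the lcov calls that follow merge the pre-test baseline into the capture and then strip third-party and system sources out of the tracefile. A condensed sketch of that flow, with the long workspace paths shortened to placeholders and the genhtml_*/geninfo --rc options omitted for brevity (all remaining flags mirror the invocations visible in the log):

    #!/usr/bin/env bash
    # Condensed sketch of the lcov capture/merge/filter flow traced here.
    # SPDK and OUT are placeholders for the /var/jenkins/workspace/... paths.
    set -euo pipefail
    SPDK=./spdk
    OUT=./output
    RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

    # Capture the counters accumulated during the tests into cov_test.info.
    lcov $RC --no-external -q -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge the pre-test baseline with the post-test capture.
    lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Drop everything that is not SPDK code proper, one pattern per pass.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done

Filtering in place (each -r pass reads and rewrites cov_total.info) matches the trace: one exclude pattern is removed per invocation.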
00:37:51.040 23:40:36 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:51.040 23:40:39 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:52.421 23:40:41 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:53.805 23:40:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:55.188 23:40:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:57.729 23:40:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:59.113 23:40:48 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:59.113 23:40:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:59.113 23:40:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:59.113 23:40:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.113 23:40:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.113 23:40:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.113 23:40:48 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.113 23:40:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.113 23:40:48 -- paths/export.sh@5 -- $ export PATH 00:37:59.113 23:40:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.113 23:40:48 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:59.113 23:40:48 -- common/autobuild_common.sh@435 -- $ date +%s 00:37:59.113 23:40:48 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714167648.XXXXXX 00:37:59.113 23:40:48 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714167648.fXD4wp 00:37:59.113 23:40:48 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:37:59.113 23:40:48 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:37:59.113 23:40:48 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:59.113 23:40:48 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:59.113 23:40:48 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:59.113 23:40:48 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:59.113 23:40:48 -- common/autobuild_common.sh@451 -- $ get_config_params 00:37:59.114 23:40:48 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:37:59.114 23:40:48 -- common/autotest_common.sh@10 -- $ set +x 00:37:59.114 23:40:48 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:59.114 23:40:48 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:37:59.114 23:40:48 -- pm/common@17 -- $ local monitor 00:37:59.114 23:40:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:59.114 23:40:48 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=56267 00:37:59.114 23:40:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:59.114 
23:40:48 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=56269 00:37:59.114 23:40:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:59.114 23:40:48 -- pm/common@21 -- $ date +%s 00:37:59.114 23:40:48 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=56272 00:37:59.114 23:40:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:59.114 23:40:48 -- pm/common@21 -- $ date +%s 00:37:59.114 23:40:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714167648 00:37:59.114 23:40:48 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=56274 00:37:59.114 23:40:48 -- pm/common@26 -- $ sleep 1 00:37:59.114 23:40:48 -- pm/common@21 -- $ date +%s 00:37:59.114 23:40:48 -- pm/common@21 -- $ date +%s 00:37:59.114 23:40:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714167648 00:37:59.114 23:40:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714167648 00:37:59.114 23:40:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714167648 00:37:59.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714167648_collect-bmc-pm.bmc.pm.log 00:37:59.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714167648_collect-cpu-load.pm.log 00:37:59.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714167648_collect-vmstat.pm.log 00:37:59.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714167648_collect-cpu-temp.pm.log 00:38:00.055 23:40:49 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:38:00.055 23:40:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:38:00.056 23:40:49 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:00.056 23:40:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:00.056 23:40:49 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:00.056 23:40:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:00.056 23:40:49 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:00.056 23:40:49 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:00.056 23:40:49 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:00.056 23:40:49 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:00.056 23:40:49 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:00.056 23:40:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:00.056 23:40:49 -- pm/common@30 -- $ signal_monitor_resources TERM 00:38:00.056 23:40:49 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:38:00.056 23:40:49 -- pm/common@43 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:38:00.056 23:40:49 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:00.056 23:40:49 -- pm/common@45 -- $ pid=56280 00:38:00.056 23:40:49 -- pm/common@52 -- $ sudo kill -TERM 56280 00:38:00.056 23:40:49 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:00.056 23:40:49 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:00.056 23:40:49 -- pm/common@45 -- $ pid=56287 00:38:00.056 23:40:49 -- pm/common@52 -- $ sudo kill -TERM 56287 00:38:00.056 23:40:49 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:00.056 23:40:49 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:00.056 23:40:49 -- pm/common@45 -- $ pid=56288 00:38:00.056 23:40:49 -- pm/common@52 -- $ sudo kill -TERM 56288 00:38:00.056 23:40:49 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:00.056 23:40:49 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:00.056 23:40:49 -- pm/common@45 -- $ pid=56281 00:38:00.056 23:40:49 -- pm/common@52 -- $ sudo kill -TERM 56281 00:38:00.317 + [[ -n 3582558 ]] 00:38:00.317 + sudo kill 3582558 00:38:00.328 [Pipeline] } 00:38:00.349 [Pipeline] // stage 00:38:00.354 [Pipeline] } 00:38:00.372 [Pipeline] // timeout 00:38:00.378 [Pipeline] } 00:38:00.395 [Pipeline] // catchError 00:38:00.401 [Pipeline] } 00:38:00.422 [Pipeline] // wrap 00:38:00.428 [Pipeline] } 00:38:00.445 [Pipeline] // catchError 00:38:00.453 [Pipeline] stage 00:38:00.456 [Pipeline] { (Epilogue) 00:38:00.472 [Pipeline] catchError 00:38:00.474 [Pipeline] { 00:38:00.490 [Pipeline] echo 00:38:00.491 Cleanup processes 00:38:00.497 [Pipeline] sh 00:38:00.784 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:00.784 56371 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:00.784 56836 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:00.801 [Pipeline] sh 00:38:01.086 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:01.086 ++ grep -v 'sudo pgrep' 00:38:01.086 ++ awk '{print $1}' 00:38:01.086 + sudo kill -9 56371 00:38:01.100 [Pipeline] sh 00:38:01.388 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:13.627 [Pipeline] sh 00:38:13.914 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:13.914 Artifacts sizes are good 00:38:13.926 [Pipeline] archiveArtifacts 00:38:13.931 Archiving artifacts 00:38:14.178 [Pipeline] sh 00:38:14.485 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:14.505 [Pipeline] cleanWs 00:38:14.517 [WS-CLEANUP] Deleting project workspace... 00:38:14.517 [WS-CLEANUP] Deferred wipeout is used... 00:38:14.524 [WS-CLEANUP] done 00:38:14.525 [Pipeline] } 00:38:14.541 [Pipeline] // catchError 00:38:14.551 [Pipeline] sh 00:38:14.833 + logger -p user.info -t JENKINS-CI 00:38:14.845 [Pipeline] } 00:38:14.862 [Pipeline] // stage 00:38:14.867 [Pipeline] } 00:38:14.886 [Pipeline] // node 00:38:14.890 [Pipeline] End of Pipeline 00:38:14.911 Finished: SUCCESS